Programmatic Pants

Clay Schouest is the Chief Strategy Officer of Carat APAC.

3 million years’ worth of human development and the spirit of inquisitiveness had been reduced to which colour underpants Wayne would select that morning.

5 – 7 minute read


 

It’s 2050. Advertising as an industry has long been dead. That’s because choice, preference and conscious decision making are no longer practiced human activities. They’ve been replaced by the complete automation of our lives.

The last active choice was registered on the 10th of December 2049 by Wayne Clayton in Singapore. The automation of decision making had reached the point where everything in life was managed through machine-based algorithms. By 2030 everyone on the planet, save for a few self-proclaimed luddites, had already fully embraced this automated life.

Wayne also practiced life automation, apart from one single conscious decision – he reserved the right to choose his underpants every morning. This was his sole act of self-expression and his only remaining mechanism for protecting his individual identity. His right to choose was not an act of political defiance; rather, it was his way of reminding himself that he was still part human. He treasured his multi-coloured, multi-patterned pants collection the way an art collector today treasures the uniqueness of craft.

But on that 10th of December morning Wayne woke up feeling tired and slightly hungover from the previous night’s soma binge. He decided he didn’t have the energy, or the steady eyesight, to choose his pants. Realising he would probably be late for work, he rolled over and surrendered the last act of human free will. “Alexa”, he croaked, “please select my pants”. And that was how the last active choice on earth was given up to an algorithm. That is how programmatic conquered pants and with it, the last choice in the world.

That is how programmatic conquered pants and with it, the last choice in the world.

The futurist Ray Kurzweil’s dream had finally become reality.

So how did we get here? Most experts trace the point of departure back to the last ever Olympic Games in Tokyo 2020 – the first time real-time deterministic data was combined with the known genetic code of each individual athlete to accurately predict the outcome of every sporting event. IBM Watson’s algorithm was so accurate that it was not only 100% correct in predicting the Gold, Silver and Bronze medallists, but also the exact placement of the remaining athletes. The feat was hailed as a breakthrough in predictive technology.

But in 2020 life automation was still in its infancy. Artificial intelligence and machine learning were still largely restricted to the automation of industries and large-scale projects such as IBM Watson. Their exponential growth would gradually come to prevail in all aspects of life thereafter.

Initially the promise of machine-learning-based algorithms was celebrated as an ingenious way to improve life and a testament to the spirit of human progress.  In the beginning the algorithms offered limitless choice. In a world where everything was connected and predictive, the possibilities to enhance life seemed endless.

The use of algorithms in advertising provided the ability to customise offers and brand messages to meet the exact needs of each known individual’s preferences, habits and predilections. It was a marvel of prediction. In fact, algorithmic advertising had become so accurate that it was able to predict and stimulate behavioural consumption demand before individuals knew it themselves.

The use of algorithms in advertising provided the ability to customise offers and brand messages to meet the exact needs of each known individual’s preferences, habits and predilections.

Gradually, by 2030, machines had become so intelligent and predictive that choice had begun to reduce and narrow. The advertising industry was the first affected by this reduction of choice. The algorithms had become so smart at predicting and stimulating future demand that people started to become less inquisitive, less spontaneous and more passive. It was more convenient to let algorithms make choices for them. The result was a self-perpetuating bubble whereby the machines started narrowing down choices and reducing randomness based upon their excellent predictive optimisation techniques. Like abandoned gold mines of the old west, advertising and marketing as industries ceased to be productive and were shut down completely.

This narrowing down of choice was the beginning of Singularity.

At this stage not all were happy with the increasing automation of life – artists and religious leaders being particularly sensitive. By 2040 these two seemingly incongruous groups had formed a strange-bedfellows alliance called the “Spirit of Randomness”. They argued for spontaneity and inquisitiveness and practiced random acts of kindness as demonstrations of the human spirit. But things turned dark and the alliance was eventually disbanded after the machine-learning intelligence agency predicted and prevented random acts of terrorism that the group had planned in order to hack and sabotage the world’s central neural computing network system.

The automation of life had become so predictive and optimised that almost no choice remained. The sheer accuracy of predictive optimisation had reduced choice to a single point.

By 2045 the writing was on the wall. The automation of life had become so predictive and optimised that almost no choice remained. The sheer accuracy of predictive optimisation had reduced choice to a single point. The repercussion was a sea of sameness. Almost everyone wore exactly the same clothes, had exactly the same driverless car model, lived in exactly the same apartment layout, listened to exactly the same music and worked in exactly the same industry – mining solar energy to help fuel the ever-increasing power needs of the supercomputers.

And by 2049 it was only Wayne left with his single conscious choice of which pants to wear in the morning – 3 million years’ worth of human development and the spirit of inquisitiveness had been reduced to which colour briefs Wayne would select that morning… The end of self-conscious thought and choice happened not with a bang, but with a morning fart and a whimper.


Fast-track Facade

Christine Liu is the assistant insights manager for Carat APAC.

Imagining what’s to come for makeup junkies in their quest for progressively quicker paths-to-purchase.

5 minute read



Not long ago, I uncovered a stack of Seventeen magazines from my teenhood: my prized possessions and authoritative beauty bibles from a decade ago! Every issue represented an excruciating month-long wait for fresh content on the latest and greatest in cosmetics, content I was willing to fork out part of my allowance to receive.

A quick flip through the issues hit me with another realisation of how patient I used to be with getting dibs on makeup: There were dog-eared articles and ads introducing new makeup products, bookmarked until I happened to get into a physical store to test and buy them.

Blast from the past! Back when product smeared on pages was enough to pique my desire to go to a store to try it out.

What a stark contrast to the magazine-eschewing, tech-reliant millennial I’ve grown into. It has become second nature to do a zippy self-source on Google for trending lipsticks, click on the image search tab for real-life user-contributed previews of my chosen lipstick, go straight to Sephora.com to verify my choice with others’ reviews, and cart out my selection immediately to receive it within the same day. I can’t help turning testy when it takes longer, or if a cosmetic brand I’m interested in is absent from the WWW, as if they’d much prefer swatting away my outstretched hand of cash.

RAPID ROUGE
It’s not just me: my hastened tendencies are typical of most under-30s, especially when it comes to beauty products. Euromonitor reports that digital channels have been essential to strong global growth of the cosmetics category, driven chiefly by young, digitally-savvy demographics embracing makeup as an online hobby [1]. They are flexing their wallets obligingly for the clever brands that are paying attention to their demand for hyper-accelerated beauty discoveries and purchases in the digital arena.

Fast-forward 10 years: tapping through video makeup tutorials and swatches on Instagram stories is a daily habit for me, and I’ve grown to expect nothing slower than a click/swipe to purchase featured products.
One lip, a million lipsticks: It’s common to see millennials fulfilling their love for makeup online, readily sharing their product reviews and gargantuan makeup collections on social media. There’s never enough makeup for a crowd that’s fed with new shades and formulas daily, at relatively inexpensive prices.

Adding to that, the makeup industry is characterised by a proliferation of choice feeding the consumer’s incessant hunt for their ultimate “holy grail” products. The return to consideration after purchase can take mere minutes: enough time to swipe on a new bullet of lipstick and realise it looks different on your skin than it did in other people’s Instagram swatches. (Bummer.) The speed that digital platforms and innovations can offer such a category is a godsend, and it’s a rare vertical where consumers have adopted and embraced futuristic upheavals so quickly and easily.

L’Oreal’s AR Makeup Genius app.

Live face filters powered by augmented reality have been a gladly-received response to this demand for swiftness, especially in the trial stage, which is typically the most time-consuming step and the biggest bottleneck in the makeup purchase journey. L’Oreal launched a mobile app called Makeup Genius, which scans the consumer’s face to superimpose a realistic, live reflection of what a wide range of L’Oreal products look like on their own skin, as if the phone were a mirror. Sampling cosmetics can now be done anytime and anywhere, with the additional aid of product recommendations, and then bought instantly, all in one platform and sitting.

LOOKS by LINE facilitates the virtual trial of products from numerous hot beauty brands such as 3CE, Clinique and Etude House in a single app.

LOOKS by chat app LINE takes this one step further, enabling multi/cross-brand recommendations, trials and purchases, and Shiseido Japan’s Telebeauty face filters can be shown real-time during Skype calls, which presents opportunities for quick feedback from friends.

Maybelline/Garnier X Grab’s beauty bar on-the-move.

Of course, being able to test-drive the actual cosmetic before buying it would take the cake. Maybelline Singapore took a cue from that literally, teaming up with Grab to install beauty stations inside GrabShare cars, allowing consumers to order a ride where they can test products while getting to the next destination. Perfect for touch-ups on makeup-melting hot days, and solving the inconvenience of finding time for testing trips to physical stores.

PRIMPING PROSPECTS
It’s hard to see this need for speed slowing down anytime soon. Let’s peer into the marketing crystal ball and have a go at charting where this accelerated purchase journey is headed in 10 years.

Automated advertising that predicts our product preferences is currently based on past purchases, but in 10 years it may be built from our actual usage instead, because the physical product itself can be tracked. Manufacturers are likely to realise that for cosmetics, the frequency of post-purchase use is a better indicator of re-purchasing than the order itself.

A “smart hairbrush” from Kerastase that diagnoses your hair health, suggests personalised care tips, and offers real-time product recommendations.

For example, foundation bottles may be equipped with sensors to monitor usage and formula preferences, framed for consumers as a means of analysing their skin type or health so as not to freak them out (L’Oreal’s Kerastase is already headed in that direction).
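To make the idea concrete, here is a minimal Python sketch of how such usage telemetry might be turned into a repurchase prediction. Everything in it is hypothetical: the sensor payload, the bottle size and the assumption that usage continues at the observed daily rate. It simply illustrates why frequency of use is a stronger repurchase signal than the original order.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List

@dataclass
class UsageEvent:
    # One record per application detected by a (hypothetical) in-bottle sensor
    day: date
    amount_ml: float

def predict_repurchase_date(events: List[UsageEvent],
                            bottle_size_ml: float = 30.0) -> date:
    """Estimate when the bottle runs out from the observed usage rate."""
    if not events:
        raise ValueError("no usage data yet")
    used = sum(e.amount_ml for e in events)
    days_observed = max((max(e.day for e in events) - min(e.day for e in events)).days, 1)
    daily_rate = used / days_observed
    remaining = max(bottle_size_ml - used, 0.0)
    days_left = int(remaining / daily_rate) if daily_rate > 0 else 365
    return max(e.day for e in events) + timedelta(days=days_left)

# Example: roughly 1 ml per day, so a replenishment message could be timed near the predicted run-out
events = [UsageEvent(date(2017, 6, 1) + timedelta(days=i), 1.0) for i in range(10)]
print(predict_repurchase_date(events))
```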

Also, with all minds on virtual reality and constant progress in that space, perhaps the technology will advance into a full sensory experience. Brands may be able to simulate textures and smell, shortening and improving the trial of cosmetics even further. Visualise being able to feel the effectiveness of a lip balm on demand, and hygienically too.

But hold up: as social parameters rapidly redefine, and technology permeates all forms of communication (think vacating offices and relying on video calls to connect, getting information from chatbots instead of salespeople, the future of social media as interacting with avatars in VR instead of a webpage), will we even need to wear physical makeup in the future, if no one will see our bare faces any longer? Perhaps we will be buying customised face filters and makeup for our VR avatars instead, which fits the projected trend of accelerated makeup purchasing: all it takes is a click to apply a full face of makeup to my cartoon self!

Perhaps we will be buying customised face filters and makeup for our VR avatars instead.

Obviously, the aid of technology can be a dream or a disaster. Delving into tech innovations just to be part of the industry trend, without careful consideration of whether it will improve the customer’s experience, isn’t just a waste of resources, but can make a brand come off as gimmicky. Being a first mover may be a tempting accolade, but it should be less of a priority than making sure these visionary contraptions meet an apparent consumer want or need.


References:

[1] Euromonitor Passport Beauty Survey: Evolution of Beauty Routine Becomes a Key Innovation Driver, January 2017

Photos:

Magazine article picture: Harper’s Bazaar 2014. Taken from https://rmsbeautyblog.files.wordpress.com/2014/08/rms-beauty_harpers-bazaar_september-2014.jpg

L’Oreal Makeup Genius picture: Taken from https://hautemakeup.wordpress.com/2014/08/07/loreal-makeup-genius/

Maybelline X Grab photo: Taken from @thesmartlocalsg on Instagram

L’Oreal Smart Brush picture: Taken from http://www.adweek.com/digital/loreal-made-smart-hairbrush-analyzes-beauty-habits-and-suggests-products-175350/

 

Artificial Intelligence: The Age of the Many, of Mediocrity… And the Realisation That Bob Marley Could Still Be Alive

Jasper Distel is the Regional Associate Strategy Director for Carat APAC.

Essentially, I still see advertising as an arty-farty, craft industry, not an automated factory. It’s humans that create and inspire; however, it’s undeniable that technology is here to help us do our jobs ‘better’. But what will that mean in the long run?

8-10 minute read


 

We could all be living in an alternative Matrix-esque reality where Bob Marley is still alive and singing—at least I seriously hope so because today’s music needs a bit of a “Redemption” from its present banality. That random thought is what I woke up with, along with a dry mouth and slight hangover, from the previous night’s beer-fuelled, marathon discussion.

During a recent trip to Korea, I had the pleasure of meeting some really interesting tech peeps. We then decided to continue the evening at a Korean-Mexican restaurant…because that’s what you eat when you’re in Korea. After “a couple” of drinks (at least six, really), we started deliberating over Artificial Intelligence…as one does during a night out.

For those of you who do not know what Artificial Intelligence is about exactly, don’t worry. It’s just something that will most likely take over your job in the next few years. And it’s HUGE. So huge, it’s true. Believe me. There is a reason they’ve already labelled it as the next industrial revolution. According to ‘specialists,’ AI will have a bigger impact than the previous three revolutions combined. For now, all you need to know is that Artificial Intelligence is H-O-T as hell and will have an impact on everything we do. Impact is not the right word—it will redesign entire industries, from the ground up. I can hear you thinking, “Jasper, stop with that futuristic Terminator mumbo jumbo bullshit”. Fair point; so let’s get back to Bob Marley and his current whereabouts.

 

Our geek-out session had reached a lull and was just on the verge of dying off when—as if on cue—we received redemption in the form of pop star Hatsune Miku, one of whose songs started pumping through the restaurant sound system. Our conversation was immediately revived and took on a new dimension.

After one of my new techy friends told me more about her, I was transfixed by the potential she represents. Never mind that her music is actually pretty shit: Hatsune Miku is possibly the best thing I’ve heard about in a long time. How the hell could I have missed something this big and significant? I felt like Cersei Lannister during her walk of atonement.

For everyone who doesn’t know who Hatsune Miku is (and I truly hope I’m not the only ignorant one) she is a 16-year-old pop star from Sapporo, Japan. Hatsune has impossibly long, turquoise hair held in place by two pigtails and big, bright blue eyes that seem to take up half her face. She started her career in 2007, opened for Lady Gaga and partnered with Pharrell Williams in 2014, and sold out ten major US venues during her 2016 world tour. She has millions of fans around the globe and has already produced over 100,000 songs. Yes—you read that correctly—100,000 songs in only 10 years. That’s an incredibly high rate of productivity. One could say she has a machinelike efficiency…and one wouldn’t be wrong in thinking that.

That’s because she isn’t real. Did I forget to mention that? Well, she’s not real in the flesh-and-blood sense of being ‘real’. She’s actually a piece of code with an avatar and a voice that is completely programmed like a synthetic instrument, processed and smoothed by algorithms. Think Max Headroom for the digital economy…on steroids.


We know AI can already recognize emotions based on people’s facial expressions and general body language, as well as discern what songs will be popular among an audience. Hatsune, as a non-human, can capitalize on machine learning, adapting as new data comes along, even without being explicitly programmed.

Imagine being at one of her concerts and halfway through the show, she scans the crowd and realizes that they’re not 100% feeling it. On the spot she will be able to change her repertoire, based on machine learning, and get people excited again. Yep, machines toying with your emotions…live.

This is when Hatsune becomes more ‘real’ and brings up philosophical questions about what actually is real. Music has always been about the artist conveying and evoking emotions from an audience. As a Nirvana fan, it’s their raw, somewhat unbridled emotion that stirs the human spirit inside me. I love the emotional rollercoaster; it makes me feel alive. If Hatsune’s music stirs an emotion in her audience and if she can interact with and alter those emotions, then isn’t that real?

Music has always been about the artist conveying and evoking emotions from an audience. If Hatsune’s music stirs an emotion in her audience and if she can interact with and alter those emotions, then isn’t that real?

This is why I was so mesmerised: Could we programme a Bob Marley AI-fuelled avatar and get him to start producing music again? He would compose new and original songs and hold concerts—and he would be real because he would be capable of interacting with audiences and evoking emotions from them.

Call me old school, but something about this thought disappoints me. And it starts a bigger conversation about the impact technology has on different industries, like our own. Essentially, I still see advertising as an arty-farty, craft industry, not an automated factory. It’s humans that create and inspire; however, it’s undeniable that technology is here to help us do our jobs ‘better’.

Many agencies have already been experimenting with facial recognition and deep learning in digital OOH screens. In 2015, Posterscope experimented with a fictional coffee brand in ‘the world’s first ever artificially intelligent poster campaign’. These bus stop ads could read the reactions of their audience and adapt their artwork and ad copy accordingly over time. Actually, not that much time was needed; in less than 72 hours, the campaign was creating posters in line with what’s considered current best practice in the advertising industry—insights that took decades of human trial and error.

In less than 72 hours, the AI-aided campaign was creating posters in line with what’s considered current best practice in the advertising industry—insights that took decades of human trial and error.
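Posterscope hasn’t published how its system worked, but one simple way to get the behaviour described above is a bandit loop: show a poster variant, read the audience’s reaction through the camera, and gradually favour the variants that earn the most positive responses. The sketch below is a toy epsilon-greedy version in Python, with invented variant names and a simulated smile signal, not the campaign’s actual algorithm.

```python
import random

# Hypothetical poster variants and their observed reaction counts
variants = ["bold-headline", "product-shot", "price-led", "humour"]
stats = {v: {"shows": 0, "positive": 0} for v in variants}

def choose_variant(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly exploit the best-performing creative, sometimes explore."""
    if random.random() < epsilon or all(s["shows"] == 0 for s in stats.values()):
        return random.choice(variants)
    return max(variants, key=lambda v: stats[v]["positive"] / max(stats[v]["shows"], 1))

def record_reaction(variant: str, smiled: bool) -> None:
    """Update counts from the camera's (assumed) smile/no-smile signal."""
    stats[variant]["shows"] += 1
    if smiled:
        stats[variant]["positive"] += 1

# Simulated audience: one variant secretly works better, and the loop converges toward it
for _ in range(1000):
    v = choose_variant()
    record_reaction(v, smiled=random.random() < (0.4 if v == "humour" else 0.2))

print(max(variants, key=lambda v: stats[v]["positive"] / max(stats[v]["shows"], 1)))
```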

In the long run, what effect might this have on the people responsible for ground-breaking, creative work?  Could algorithms and AI produce work of the same calibre as our creative, human geniuses? Will machines be able to deliver stories that amaze us, that make us laugh or cry? Possibly.

In the beginning, AI could provide—en masse—an artificial crutch that enhances the creativity of those who are less inclined. However, a tipping point may come about when we rely too heavily on the aid of algorithms at the expense of human creativity. Remember the pre-Google Maps era, when we still had to rely on paper maps to get from A to B? We had to think on our own and figure out the best route to arrive at where we wanted to be, if we arrived there at all.  Could a creativity-enhancing AI actually make everyone less creative and even more mediocre?

How might this play out over time? In my mind, this conjures up the somewhat strange metaphor of Fat Bastard from Austin Powers. Stay with me here: His head represents truly ground-breaking, innovative work done by a select few creative masters, while his body represents the newly AI-enhanced creative masses. In time, with the aid of AI, his belly will get bigger and bigger, while his head will remain the same, or may even shrink. Got that? Welcome to the age of Mass Mediocrity.


Hatsune’s cheery pop song eventually ended, and I half expected a newly created Bob Marley tune to follow. It didn’t happen, at least not yet.  We finished our Korean-Mexican food and one of my lovely companions turned to me and said the magic words, “back to the future”.

 

Muchas gracias to Clay Schouest for connecting the dots and for the edits.

Retiring the Canary

Clay Schouest is the Chief Strategy Officer of Carat APAC.

Are we entering a ‘new norm’ for advertising where growth in ad spend and economic growth enter a new relationship? It seems we might be.

5 – 7 minute read


Global advertising spend has long acted as the canary in the coalmine for the general state of the global economy. When ad spend goes up, so does economic growth; when spend contracts, so, generally, does the economy. This is the first year in many that global economic growth forecasts are going up whilst the advertising growth rate is lower than the previous year’s. To be exact, the overall economy is predicted to grow 3.6% in 2017 versus 3.1% actualised in 2016, whilst advertising growth rates are forecast to be lower, with a 3.5% increase in 2017 versus 4.1% actualised in 2016.

This divergence suggests a couple of potential scenarios:

1.  That economic growth may be about to slow, or that advertising may actually exceed its initial forecast.

2.  That we are entering a ‘new norm’ for advertising where growth in ad spend and economic growth enter a new relationship.

Could it be that less advertising spend is required to achieve growth?

It’s the latter that I would like to explore. Perhaps we are starting to see a new norm for advertising. Could it be that less advertising spend is required to achieve growth? This would suggest advertising investment is becoming more effective at achieving like-to-like sales.

One could assume this may be down to the influence of the peer-to-peer economy – in other words, the power of fully-scaled social media, whereby ‘earned’ media is becoming increasingly influential in driving sales.

It could also be attributed to the increasing shift toward addressable advertising—advertising that serves ads directly based on demographic, psychographic, or behavioral attributes associated with the consumer instead of projecting what content a particular audience will be consuming.

To use this Adweek example: Whether your target audience is watching TBS at 2 p.m. or ESPN at 10 p.m., they’ll be served your ad. And it is served to your target whether they’re a heavy or a light TV viewer because the ad finds the audience versus the old model of having the viewer come across the ad. And advertisers are only charged on impressions that are served to their target audience.

Over the past few years, new ad technology has increased the prevalence of addressable advertising, with better, more accurate targeting and optimisation that requires less advertising spend to achieve the same sales and in turn reduces wastage.
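As a rough illustration of the addressable model described above, the Python sketch below serves an ad only when the individual viewer matches the target definition, regardless of channel or daypart, and bills only for those served-to-target impressions. The audience attributes, the target rule and the CPM are all invented for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Viewer:
    age: int
    gender: str
    in_market_for_suv: bool

@dataclass
class Campaign:
    name: str
    cpm: float  # price per 1,000 served-to-target impressions

    def matches(self, viewer: Viewer) -> bool:
        # Target definition based on audience attributes, not programme or daypart
        return viewer.in_market_for_suv and 25 <= viewer.age <= 54

def serve(campaign: Campaign, viewers: List[Viewer]) -> float:
    """Serve only to matching viewers and charge only for those impressions."""
    served = [v for v in viewers if campaign.matches(v)]
    return len(served) / 1000 * campaign.cpm

audience = [Viewer(34, "f", True), Viewer(61, "m", True), Viewer(29, "m", False)]
print(serve(Campaign("SUV launch", cpm=25.0), audience))  # billed for one served-to-target impression
```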

Advertising investment allocated to addressability has increased by over 2,000%. Currently only 10% of all advertising is invested in this way – and analysts predict it will grow exponentially in the coming years (30% by 2018 and almost 50% in the US by 2020). The scalability of addressability is on our doorstep.

It could be attributed to the increasing shift toward addressable advertising, with better, more accurate targeting and optimisation that requires less advertising spend to achieve sales and in turn reduces wastage.

Does addressability equate to better effectiveness? There is evidence to suggest it does. In a recent study by the analytics group D2D, the effectiveness of addressable advertising was measured against a control group panel. Plans that contained elements of addressability outperformed those that did not. The study concluded that addressability, done in the correct way, can improve effectiveness by a factor of 10. This will most likely increase as it becomes more scaled.

As more platforms and media go addressable, traditional ‘fixed inventory’ advertising will become less and less common. This, coupled with the fact that people consume less advertising and have more ad-blocking technology installed, might ultimately spell the death of advertising as we know it.

The creation and distribution of brand messages will be addressable and largely embedded in entertainment-based content or embedded in general utilitarian aspects of our daily lives. Reminders will pop up on our digital screens and household items – such as a message from our favourite yogurt brand on our digital fridge door, or a message about allergy relief medication relayed by Alexa in the morning when we ask for the day’s weather forecast.

It feels like the old adage of “half the money I spend on advertising is wasted; the trouble is I don’t know which half” may be coming to an end altogether. Coming back full circle: are we entering a ‘new norm’ for advertising where growth in ad spend and economic growth enter a new relationship? It seems we might be.

 

FaceTime

Tam Le is the Regional Associate Strategy Director for Carat APAC.

This article explores the implications of facial recognition, both for advertising and for retail, as well as for future privacy rights.

7-10 minute read


Remember that scene in Minority Report where the protagonist, played by Tom Cruise, walks through a corridor of fully personalised brand advertisements from companies like Lexus and American Express and then ducks into a Gap where the virtual sales assistant greets him by name whilst inquiring about his previous purchases?


Well despite the film being set in 2054, 37 years in the future, we’re actually not far off from this reality today. Google’s FaceNet claims >99% accuracy in facial recognition; Facebook’s DeepFace, >97% accuracy [1]. And when you stop and think about how many pictures are uploaded of you onto Facebook (I’ve been on for over a decade and I’m not camera shy), the implications are overwhelming.

It is these implications of facial recognition that I want to explore in this article, both for advertising and for retail, as well as for future privacy rights.

 

ADVERTISING: I SEE YOU LOOKING AT ME

The first level of facial recognition (in order to gain public acceptance and normalize the idea) is to target messages and ads based on anonymized data like demographics (age, gender, etc.) or consumer behaviour.


Posterscope designed and executed a campaign for the GMC Acadia mid-size SUV that, for the first time, linked responsive facial recognition technology to dynamic displays presenting personalized content in an out-of-home campaign [2]. The digital screens were fitted with video sensors and partner Quividi’s audience- and context-aware platform, which anonymously detected whether a passing shopper was a man or woman, alone or with a group, adult or child, or even frowning or smiling. Once a detection was made, the digital screens were populated with video content and brand messaging tailored to the identified audience. No data or images of any type were collected, stored or shared at any time, ensuring privacy.
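A hedged sketch of the decision layer such a screen might run once the detection platform has classified a passer-by: anonymous attributes in, a pre-produced creative out, with nothing persisted (mirroring the campaign’s privacy constraint). The attribute names and the creative mapping here are invented for illustration and are not Quividi’s actual output.

```python
from typing import NamedTuple

class Detection(NamedTuple):
    # Anonymous attributes a video-analytics platform might emit per passer-by
    gender: str       # "male" / "female" / "unknown"
    with_group: bool
    is_child: bool
    smiling: bool

def pick_creative(d: Detection) -> str:
    """Map detected attributes to a pre-produced video spot; nothing is stored."""
    if d.is_child:
        return "family-seating-and-safety.mp4"
    if d.with_group:
        return "seven-seat-road-trip.mp4"
    if d.smiling:
        return "weekend-getaway.mp4"
    return "quiet-commute-comfort.mp4"

print(pick_creative(Detection("female", with_group=True, is_child=False, smiling=True)))
```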


In another facial recognition application, Women’s Aid was able to use the technology in a powerful way that underscored their message about domestic violence. Their digital billboards displayed an image of a bruised woman and recognized when people were paying attention to the ad. As more people looked at the ad, her bruises and cuts healed faster, communicating that if people took notice, domestic abuse could be halted [3].

In the future, as facial recognition in OOH becomes more publicly accepted, the displays can start collecting and storing more data on people (what display they pass, at what time, whether they go to a store shortly after viewing an OOH display with a strong sales message, etc.) and building profiles with a history of behaviours.

Fused with the data we will have of people’s online behaviours, this starts creating a more complete overview of the consumer journey, addressing the common concern of not being able to tie offline sales data with online ad impressions. This will also bring us closer to a singular universal ID for each person, which will contain their complete profile and track and retarget them both online and off.

 

RETAIL: I’M WHOEVER YOU WANT ME TO BE

Outside of pure advertising advantages, facial recognition would also have many implications for the retail experience as our physical and digital worlds merge.


KFC, in partnership with Baidu, is already testing out personalization based on the data it gathers through facial recognition at one of its restaurants in Beijing [4]. Installed image recognition hardware scans customer faces, detects mood, gender and age and recommends menu items accordingly. For example, as Baidu claimed in a press release, the system would tell “a male customer in his early 20s” to order “a set meal of crispy chicken hamburger, roasted chicken wings and Coke for lunch,” while “a female customer in her 50s” would get a recommendation of  “porridge and soybean milk for breakfast.” The setup also has built-in recognition to provide better services for returning customers: it can “remember” their order history and suggest past favorites.
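Baidu hasn’t detailed the recommendation logic, but the two behaviours described (a cold-start suggestion from detected age, gender and mood, and a returning-customer suggestion from order history) can be sketched in a few lines of Python. The segment rules below are illustrative guesses, not the real system.

```python
from collections import Counter
from typing import List, Optional

def recommend(age: int, gender: str, mood: str,
              history: Optional[List[str]] = None) -> str:
    """Suggest a menu item from detected attributes, or from past favourites if known."""
    if history:
        # Returning customer: surface the most frequently ordered item
        return Counter(history).most_common(1)[0][0]
    # Cold start: rule-of-thumb mapping from the detected segment (illustrative only)
    if gender == "male" and age < 30:
        return "crispy chicken burger set with wings and Coke"
    if gender == "female" and age >= 50:
        return "porridge and soybean milk"
    if mood == "tired":
        return "coffee and an egg tart"
    return "chef's daily set"

print(recommend(23, "male", "neutral"))
print(recommend(52, "female", "happy", history=["porridge", "porridge", "wings"]))
```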

In my utopian view of the future, all McDonald’s would have a centralized global database of their customers’ faces  so they could remember, in any country, that I’m going to order either the Big Mac meal with an extra order of fries or the Double Cheeseburger meal if I’m just looking for a snack. Wouldn’t it be great if McDonald’s tracked me from childhood and suggested new menu items as my taste buds developed over time? Additionally, they would know when I reached major life milestones, like becoming a mother, once I start ordering Happy Meals to go with my Big Macs. They could literally see me grow up in front of their very own image recognition hardware, like some kind of distant relative that provides me with fast food.


In an even more maternal move, today the Luce X2 TouchTV vending machine can not only greet you by name and remember your past purchases, but it can also, with access to your medical records, deny you certain items, like a sugary snack if you have diabetes or items that may contain nuts if you have an allergy [5]. The system could also be connected to a retailer’s loyalty points system or linked to the room numbers in a hotel. The opportunities for receiving personalized service from this once faceless, impersonal machine are endless.


Let’s apply this to a larger retail space, say a supermarket of the future. Building off the concept of Tesco’s virtual subway stores in South Korea, we could use facial recognition to have the store “shelves” customize themselves for each person in a way that would make customers more likely to purchase and purchase more. I’m more likely to buy certain brands, items, and categories than you are, and vice versa. How can retailers exploit this and get both of us to buy more?

For example, if I stepped up to a digital shelf in the milk “aisle”, it would only show me soy, coconut, or almond milk made with very few ingredients (the only types of milks I buy) whereas it could show someone else baby formula. But then the retailer also knows I have a sweet tooth so it would show me some cookies that would go great with milk, which I may not have bought in a physical store because the cookies are placed far from the milk.
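Here is a minimal sketch of how such a digital “shelf” might be assembled per shopper once facial recognition has matched them to a profile, including the milk-plus-cookies cross-sell just described. The profiles and catalogue are made up for illustration.

```python
from typing import List

# Hypothetical shopper profiles keyed by a facial-recognition match
profiles = {
    "shopper_a": {"buys": {"soy milk", "almond milk"}, "sweet_tooth": True},
    "shopper_b": {"buys": {"baby formula", "whole milk"}, "sweet_tooth": False},
}

catalogue = {
    "milk": ["soy milk", "almond milk", "coconut milk", "whole milk", "baby formula"],
    "treats": ["chocolate chip cookies", "rice crackers"],
}

def build_shelf(shopper_id: str) -> List[str]:
    """Show only the milks this shopper actually buys, plus a tailored cross-sell."""
    profile = profiles[shopper_id]
    shelf = [item for item in catalogue["milk"] if item in profile["buys"]]
    if profile["sweet_tooth"]:
        shelf.append("chocolate chip cookies")  # cookies surfaced right next to the milk
    return shelf

print(build_shelf("shopper_a"))  # ['soy milk', 'almond milk', 'chocolate chip cookies']
```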

Additionally, brands could pay to be in the prime front-and-center shelf space (which they do in physical stores), but do it in a cost-effective way, like only paying for a prime position on target consumers’ virtual shelves, instead of wasting money through prime placement on everyone’s shelves.

Facial recognition would also have many implications for the branding industry. With virtual shelves, brands can cost-effectively A/B test new packaging designs in situ and at scale to determine each option’s effectiveness, or alter the claims shown on packaging per individual to tailor their message. This could mark the end of one-design-fits-all.

On top of all of these advances, facial recognition also offers the benefit of frictionless payment, like Alibaba’s Smile to Pay technology. And as we know from technological advances of the past, such as moving from cash to credit card, the further we move away from exchanging physical goods and the easier we make it to pay, the more people will spend.

You’re probably now wondering, if these new innovations allow for greater and greater marketing effectiveness, and it seems like we already have the technology to implement these ideas, what’s stopping us from getting to this advertisers’ utopia?

 

PRIVACY RIGHTS: YOU’RE CREEPING ME OUT

“We recognize the creepy, but we don’t want to stifle innovation. If we cross that line from cool to creepy, people will stop using that service,” recognizes Carl Szabo, a lawyer with NetChoice, a tech industry group that represents companies like Facebook, Google, and Yahoo [6].

I recognize not everyone is as excited as I am at the prospect of McDonald’s recognizing you and watching you grow up, or store shelves rearranging themselves as you approach. In fact, even with today’s relatively limited use of facial recognition, a backlash is already starting.

A couple of examples of this come from artist and technologist Adam Harvey, who has designed both face camouflage and anti-surveillance clothing. The face camouflage, called computer vision dazzle (or CV dazzle), uses a strategic application of paint and hair-styling to throw off the patterns that facial recognition algorithms look for, such as the degree of light and dark in the cheekbones, or the way color is distributed on the nose bridge—a baseline amount of symmetry [7]. When CV dazzle is executed properly, it transforms a face into a mess of unremarkable pixels, causing a momentary burst of confusion for the computer and allowing the wearer to go undetected. The anti-surveillance clothing, dubbed the Hyperface project, involves printing patterns on to clothing or textiles which then appear to have multiple eyes, mouths and other features that a computer can interpret as a face, overwhelming facial recognition systems by presenting them with thousands of false hits so they can’t tell which faces are real [8].

As evidenced by these self-initiated countermeasures, many consumers don’t want to be tracked by cameras and have their images sold to advertisers by corporations like Google and Facebook. The path to societal acceptance of facial recognition technology will have to be slow and transparent. Szabo, the lawyer with NetChoice [6], admits that “legislation cannot move at the speed of innovation,” and suggests companies make their facial recognition policies hyper-transparent and explicit so that consumers can “vote with their feet” if they are “creeped out.”

“If we did start recognizing people en masse then I think outdoor would have the same problem that digital display is having with people getting fed up and install ad blockers. You can’t do something just because the technology is there, you have to be led by the consumer and not the technology,” says Chris Pelekanou, commercial director at Clear Channel, one of the world’s largest out-of-home advertising businesses [9]. This is something we as marketers must keep in mind, as advertising is most effective when people embrace it.

 

This article was developed with much help from Ben Milne, Head of Innovation at Posterscope.

[1] http://fortune.com/2015/03/17/google-facenet-artificial-intelligence/
[2] http://pioneeringooh.com/responsive-facial-recognition-technology-redefines-customer-engagement/
[3] http://www.adweek.com/creativity/bruised-woman-billboard-heals-faster-more-passersby-look-her-163297/
[4] https://techcrunch.com/2016/12/23/baidu-and-kfcs-new-smart-restaurant-suggests-what-to-order-based-on-your-face/
[5] http://www.telegraph.co.uk/finance/newsbysector/retailandconsumer/11274179/The-vending-machine-of-the-future-is-here-and-it-knows-who-you-are.html
[6] https://news.vice.com/article/facial-recognition-technology-is-big-business-and-its-coming-for-you
[7] https://www.theatlantic.com/technology/archive/2014/07/makeup/374929/
[8] https://www.theguardian.com/technology/2017/jan/04/anti-surveillance-clothing-facial-recognition-hyperface
[9] https://www.theguardian.com/media-network/2016/aug/17/facial-recognition-a-powerful-ad-tool-or-privacy-nightmare