Summary
I'll discuss some of the ways in which products are targeted at potential buyers, along with the benefits and drawbacks of each, and then consider some future directions for advertising.
How are particular groups targeted now?
1. No targeting
The most obvious answer is that in many cases they're not. Adverts are simply put up in the hope of reaching someone eventually. The benefit is that you'll reach most people; the drawback is that many people with no interest whatsoever will also be reached. Even though Internet communication is cheap, this can still result in wasted effort and money.
2. Demographics
Traditionally, marketers try to understand their customers (potential or current) by grouping them, primarily using demographics. This works on the assumption that members of each group have similar, predictable characteristics: they have similar habits and buy similar things. In some ways, this is almost a type theory, and type theories are not widely accepted in contemporary psychology. The benefits are that this does increase conversions over no targeting, and that groups can be readily identified once a customer's information has been provided. The drawback is that getting this information is hard; companies are often willing to pay a lot of money for this type of data about a market. The more specifically a product can be targeted, the greater the chance of conversion; but targeting more specifically requires an increasing amount of information about potential buyers. This can be hard to obtain, particularly as much of the information might not be obvious or accessible in large-scale survey research.
3. Stated interests
Potential buyers might be asked to state what their preferences are. This can help filter out irrelevant advertising material and focus on relevant material, thus leading to increased conversions. The drawbacks are that it can be hard to get this information, and that such information may simply not be true.
4. Purchase history
Another form of prediction by groups is by using prior purchases. The theory is that if two people buy product A and the first person also bought product B, then the second person is more likely to buy product B than someone who didn't buy A. The advantages are that it requires no demographic information about buyers, and that it increases conversions to a level above chance. The disadvantages are that it still relies on an assumption about types of people; that the data analysis required is heavy (often over very large data sets); that the data can be difficult to obtain in the first place; and that it doesn't account for transience: trends or influences that act upon large groups of people for short periods of time.
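As a rough illustration of the idea (not any particular retailer's algorithm), a minimal "customers who bought A also bought B" recommender can be sketched as a co-occurrence count over hypothetical purchase baskets:

```python
from collections import defaultdict

def cooccurrence_scores(baskets, bought_item):
    """Count how often other items appear in baskets that contain bought_item."""
    counts = defaultdict(int)
    for basket in baskets:
        if bought_item in basket:
            for item in basket:
                if item != bought_item:
                    counts[item] += 1
    # Rank candidate recommendations by how often they co-occur with bought_item.
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical purchase histories: each set is one customer's basket.
baskets = [
    {"A", "B"},
    {"A", "B", "C"},
    {"A", "C"},
    {"B", "D"},
]

print(cooccurrence_scores(baskets, "A"))
```

Real systems normalise these counts (otherwise popular items dominate every recommendation), which is part of the heavy data analysis mentioned above.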
5. Contextual adverts
Google popularised the contextual advert. These rely upon analysing the text of whatever page or search query a person enters and producing semantically similar adverts in response. They are held to have high levels of conversions due to increased relevance, but they fail in one aspect: they take into account only the context, not the person behind the context. "We are what we search for" is not enough, because many searches are for things that do not define us but rather meet temporary or one-off information needs.
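A heavily simplified sketch of contextual matching, assuming a hypothetical ad inventory keyed by trigger keywords, could rank adverts by their term overlap with the query:

```python
def contextual_match(query, ad_inventory):
    """Score each advert by how many of its trigger keywords appear in the query."""
    query_terms = set(query.lower().split())
    scored = []
    for ad, keywords in ad_inventory.items():
        overlap = len(query_terms & set(keywords))
        if overlap:
            scored.append((ad, overlap))
    # Highest-overlap adverts first.
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

# Hypothetical inventory: advert name -> trigger keywords.
ads = {
    "Running shoes sale": ["running", "shoes", "trainers"],
    "Laptop deals": ["laptop", "notebook", "computer"],
}

print(contextual_match("best running shoes for beginners", ads))
```

Production systems use far richer semantic matching than keyword overlap, but the limitation is the same: everything here is derived from the query, and nothing from the person who typed it.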
6. Social network
Facebook recently announced its intention to use the social graph to target advertisements. The theory is that what your friends buy is what you will want to buy. This may work, but it ignores the realities of how people actually use social networks. Many people use them with family members (do I really want to buy what my daughter or my father wants?), for professional networking (a recruiter I've chatted with once bought a particular car; that will not influence my decision at all), or with people we have non-permanent relationships with (I can 'friend' people I haven't seen since primary school; how do their purchasing decisions relate to mine?).
These methods offer a partial solution. If done well, they will increase conversions, which makes it easy to be complacent, perhaps even to think the problem has been solved as well as it can be. The most effective advertising and marketing units will use a range of methods that complement each other to provide as full a coverage as possible.
But if we explore newer techniques, we might find other ways to complement them.
Trade-offs
There appears to be a trade-off. For an advert to have a better chance of producing a sale, it has to be targeted more specifically; but more specific targeting requires data gathering and analysis before any advertising takes place.
But these methods still don't address the person behind the advert. People are either not considered at all, treated as uniform members of a group (which works, but could be greatly improved), or required to provide data about themselves up-front. The only exception is the contextual advert, which takes no account of the person, only of their current information need.
Future methods
The holy grail of advertising is a method that takes into account not just the context but also the person behind it, while requiring as little prior data gathering and analysis as possible, preferably none. The data must also be honest: the likelihood of potential buyers giving misleading information should be low. Finally, the data must be obtained with the permission of the potential buyer. Not having this permission could backfire and turn a potential customer into one who refuses to do business.
So, where?
This leads to the question: where can we get such information?
The information sources must:
- Be made publicly available with users' permission, preferably express permission
- Be retrieved for low or effectively zero cost
- Be about a single person
- Give a description of a person at the personal level
- Give some indication of the person's current context
- Provide a degree of authenticity
The solution
One answer is social media. Facebook, Twitter, and GooglePlus all offer information that is (often) publicly available and highlights the concerns of interest to an individual at a personal and contextual level. If a matter was not relevant to someone, why would they write about it?
But this information is hard to analyse. There are no neat forms with precise Likert scales, no specifically expressed interests, and the like. It's plain, natural text written to be understood by other humans, and it needs more preparation than most companies can put into a single potential sale before it can be analysed. Methods such as human-performed content analysis can categorise statements according to set criteria, and the resulting information is gold; human methods, however, don't scale well.
But there is hope. Methods within artificial intelligence, specifically natural language processing, can analyse such text within a representation or 'map' of language. From this, we can see how closely related two pieces of text are. Or, in other words, we can relate someone's Facebook posts to a range of product descriptions.
Using natural language processing techniques can help you understand how similar two pieces of text are: one being a person's social media posts and the other being a range of products or services. The assumption is that the more similar an advert is to someone's social media, the higher relevance it will have to that person. This means higher conversions and greater sales.
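As a toy illustration of this idea (a simplified stand-in, not Roistr's actual method), a bag-of-words cosine similarity can rank hypothetical product descriptions against a person's posts:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between the bag-of-words vectors of two texts."""
    va = Counter(text_a.lower().split())
    vb = Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented example: a person's posts and two candidate product descriptions.
posts = "training for my first marathon and looking for new running shoes"
products = [
    "Lightweight running shoes for marathon training",
    "Stainless steel kitchen knife set",
]

# Rank products by similarity to the person's posts, most relevant first.
ranked = sorted(products, key=lambda p: cosine_similarity(posts, p), reverse=True)
print(ranked[0])
```

Word-overlap similarity misses synonyms and paraphrase, which is exactly why deeper semantic representations are needed; but the pipeline shape (person's text in, ranked products out) is the same.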
There are many methods within natural language processing for estimating relevance. Google themselves use a system that was (and may still be) reliant upon WordNet, a lexical database of words and the relationships between them.
Text analysis is used to identify topics within text and sentiment analysis is used to understand general feelings or attitudes towards something.
At Roistr, we use a combination of methods to understand the underlying meanings of documents. These are used to gauge the proximity of two or more documents from which we infer relevance. If two pieces of text are close then they are similar in meaning.
Evidence
We're running an experiment soon using Amazon's top ten best sellers: people with a Twitter account will be asked to rate the ten books by how much each interests them personally. We will then take each person's public tweets and score the same books using our semantic relevance engine. The two sets of results will be compared: individual human judgements against the scores derived from that individual's tweets.
Hopefully, this will give us some numbers to help us see whether it's possible to predict the most relevant product from a person's tweets. The experiment will be released soon and we will publish the results both here and in a white paper that will be free to download.
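One plausible way to compare the two sets of results is a rank correlation. The sketch below computes Spearman's rho (ignoring tie correction) over invented ratings purely for illustration; the actual analysis may differ:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation between two equal-length rating lists (no ties)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    # Classic formula: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d2) / (n * (n * n - 1))

# Invented data: one participant's interest ratings for ten books,
# and the relevance scores an engine might assign from their tweets.
human = [9, 3, 7, 1, 8, 2, 6, 4, 10, 5]
engine = [0.8, 0.2, 0.7, 0.1, 0.9, 0.3, 0.5, 0.4, 1.0, 0.6]

print(round(spearman_rho(human, engine), 3))
```

A rho near 1 would mean the engine orders the books much as the person does; a rho near 0 would mean the tweets carry no usable signal.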