Post Disruption: Market research, what next?

Based on my personal experiences

Around 2011 the owner of the company that I then worked for came to Toronto and challenged the employees to go on social media. At that point, few had considered how important this was about to become for our jobs.

In the prehistoric days of market research, consumer data were collected by walking around the city, knocking on doors and interviewing people face to face. I caught the tail end of this when I started in market research in Russia (which is another story). Then call centres became the industry standard, and most survey research was conducted on the phone. However, qualitative discussions were still held in person at focus group facilities.

In the 1990s, online surveys took over as the main means of eliciting consumer data. Rather than finding consumers through random digit dialing, companies now started to build online panels of people who were willing to answer surveys. There was some concern about the representativeness of these panels at first, but it was soon put to rest when the considerable cost savings of this approach became apparent. Initially, research companies needed specific expertise, expensive software and sufficient server capacity to program, host and run these online panels and surveys. Quite a few even tried to develop their own software.

The 2000s saw a rise in self-serve online research solutions. The idea of software as a service (SaaS), and ‘freemium’ products such as SurveyMonkey made it possible for anyone to collect the data they wanted. It seemed no longer necessary to have any particular expertise or resources. At that time, I felt that many established market researchers were reluctant to open their eyes to the new reality. A few companies were blazing ahead, but the majority sat back and hoped it wouldn’t get too bad.

I realized around 2012 that things would never be the same again. When I had the opportunity to start my own business in 2014, I jumped with both feet into the world of SaaS, social media and startups, looking for ways to innovate and carve out a niche for myself in the insights community. I read Clayton Christensen and Eric Ries. I experimented with different tools and methodologies. I immersed myself in WeAreWearables Toronto, then a monthly event at the MaRS Discovery District, an innovation hub that offers services for startups and scaleups, and in various other incubators and accelerators around the city.

Innovation was the buzzword, along with agile, real-time, lean and design thinking. The word disruption became popular in my neck of the woods a little later. Hard to believe that this was only six years ago. Now the word already seems a little tired. I feel that the dust of disruption has settled in market research, and firms have made adjustments.

Those who started off with the new technologies – for example Qualtrics, or the Canadian company Vision Critical – have probably done very well (not that I know their books). The traditional firms have incorporated various technologies into their offerings. Many have social media analysis products. Some run large-scale customer feedback platforms for their clients. Some use virtual reality goggles for concept testing or shopper studies. Some offer online communities or hold virtual discussion groups. And to cut costs, many have moved business functions into the cloud, and outsourcing is widely used.

Whether this is sufficient to keep traditional market research organizations profitable, I don’t know. What I have learned in my own business is that I am still selling ATUs, segmentations, concept tests and one-on-one interviews. After the frenzy to be innovative and different, what clients seem to appreciate is my expertise. They trust me to know how market research is done properly, to execute the project for them, and to deliver them results that reflect a thorough understanding of their business questions, and in a form that adheres to their internal processes and requirements.

That’s why I stopped following the hype in the last couple of years. Disruption happened, but that was in the past. No amount of research automation can substitute understanding. And understand the customer we must.

But…I recently started renting a co-working desk at a business centre. I am easily twenty years older than the majority of the other tenants there. So now I am back in a startup environment, and I see that while the hype has decreased, startups are here to stay for the foreseeable future. Businesses that are trying to disrupt various fields are numerous and, with the myriad applications of AI, far from done.

It will remain important to understand how new technologies fit into and change existing businesses. For example, I am intrigued by the anatomy of the Cloud: where servers are located, how data are moved around, and what implications different setups have for data loading speeds, data security and so on. When people I recruit for surveys complain that a survey takes longer than they expected, is that because their Internet connection is slow, or because the survey hosting company has switched from its own servers to the cloud?

Data governance also interests me. With many market research companies using subcontractors, it is almost impossible to see all the way down the supply chain where data may be stored, processed or transferred to. The business risk related to data governance has increased exponentially for market research firms. Good times for lawyers and insurance companies.

But understanding AI, the Cloud and computing in general is and will be immensely important for anyone interested in the affairs of this world. I think this is a topic that should be taught in school, so that future citizens can make informed decisions about it. I want to learn more. So far, I have read three books on AI – it’s a start.

Barbara’s AI reading list:

  • Kartik Hosanagar: A Human’s Guide to Machine Intelligence
  • Ajay Agrawal, Joshua Gans, and Avi Goldfarb: Prediction Machines
  • Virginia Eubanks: Automating Inequality


Ensuring equitable access to healthcare in the age of algorithms and AI

Yesterday, Dr. Peter Vaughan, chair of the board of directors of Canada Health Infoway, spoke at Longwoods’ Breakfast with the Chiefs.

After outlining the current state and future perspectives of digitization in healthcare, his main message was two-fold:

1. We are at risk of a “failure of imagination”: we cannot fathom all the possible futures that digital disruption might confront us with, and hence fail to plan adequately for their pitfalls.
2. There is great potential for algorithms to be built in such a way as to solidify and deepen the inequalities that currently exist in our system, and we need government oversight of such algorithms to prevent this from happening.

The first point is easy to understand; the second may need a little more explanation. Algorithms are widely used to determine what information is presented to us online and what choices are offered to us. We are all familiar with websites offering us items we ‘might also like’, based on our past choices and on what other purchasers have bought.

At a time when data from various sources can be linked to create sophisticated profiles of people, it would be easy for a healthcare organization to identify individuals that are potentially ‘high cost’ and to deny them service or to restrict access to services. Bias can creep into algorithms quickly. If people of a certain age, ethnic background or location are deemed to be ‘higher risk’ for some health issues or for unhealthy behaviours, and this is built into an algorithm that prioritizes ‘lower risk’ customers, then you are discriminated against if you share the same profile, no matter how you actually behave.
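To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. Every attribute name, weight and cutoff below is invented for illustration; no real scoring system is being quoted.

```python
# Hypothetical risk-scoring sketch. All attribute names, weights and
# thresholds are invented for illustration, not taken from any real system.

def risk_score(profile: dict) -> float:
    """Score a person using only group-level attributes, not behaviour."""
    score = 0.0
    if profile["age"] > 60:                     # age bracket flagged 'higher risk'
        score += 0.4
    if profile["postal_area"] in {"X1", "X2"}:  # location used as a proxy
        score += 0.5
    return score

def prioritize(profile: dict, cutoff: float = 0.5) -> bool:
    """Accept only customers the score deems 'lower risk'."""
    return risk_score(profile) < cutoff

# Two people with identical (healthy) behaviour, different postal areas:
a = {"age": 35, "postal_area": "X1"}  # turned away purely because of location
b = {"age": 35, "postal_area": "Y9"}  # accepted
```

Note that the score never looks at what either person actually does; sharing a postal code with a ‘higher risk’ group is enough to be turned away, which is exactly the pattern described above.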

Discrimination is often systemic, unless a conscious effort is made to break the cycle of disadvantaged circumstances leading to failure to thrive leading to lower opportunity in the future. As Dr. Peter Vaughan pointed out, we in Canada value equitable access to healthcare, education and other public goods. We expect our government to put safeguards in place against discrimination based on background and circumstances. But how can this be done?

Private, for-profit enterprises have a right to segment their customers and offer different services to different tiers, based on their profitability or ‘life-time customer value’. Companies do this all the time; it is standard business practice. But what about a private digital health service that accepts people with low risk profiles into its patient roster, but is unavailable to others whose profiles suggest they may need a lot of services down the line? Is this acceptable?

And if the government were to monitor and regulate algorithms related to the provision of public goods (such as healthcare), who has the right credentials to tackle this issue? We would need people who understand data science (how algorithms are constructed and how AI feeds into them), the social sciences (to identify the assumptions underpinning the algorithms) and ethics. Since technology is moving very fast, we should have started training such people yesterday.

And how could algorithms be tested? Should this be part of some sort of approval process? Can testing be done by individuals, relying on their expertise and judgement? Or could there be a more controlled way of assessing algorithms for their potential to disadvantage certain members of society? Could the process even be automated?

I am thinking there may be an opportunity here to develop a standardized set of testing tools that algorithms could be subjected to. For example, one could create profiles that represent different groups in society and test-run them as fake applicants for this or that service.
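One way such a test kit could be sketched in Python: pair synthetic base profiles with each value of a protected attribute, run them through a black-box decision function, and compare acceptance rates across groups. The decision function and all attribute names here are invented for illustration, not drawn from any real auditing tool.

```python
# Hypothetical audit sketch: probe a black-box decision function with
# matched synthetic profiles that differ only in one protected attribute.

def audit(decide, protected_values, base_profiles):
    """Return the acceptance rate for each value of the protected attribute."""
    rates = {}
    for value in protected_values:
        outcomes = [decide({**p, "group": value}) for p in base_profiles]
        rates[value] = sum(outcomes) / len(outcomes)
    return rates

# A toy decision function that (improperly) reacts to the protected attribute:
def biased_decide(profile):
    return profile["income"] > 40000 and profile["group"] != "B"

base = [{"income": i} for i in (30000, 50000, 70000)]
rates = audit(biased_decide, ["A", "B"], base)
# A large gap between rates["A"] and rates["B"] would flag the
# algorithm for human review.
```

This is essentially the ‘audit study’ long used to detect discrimination in hiring and housing, with the fake applicants generated in software instead of sent by mail.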

Also, algorithms change all the time, so one would perhaps need to have a process of re-certification in place to ensure continued compliance with the rules.

And then, there would be the temptation for companies to game the system. So, if a standardized set of test cases were developed to test algorithms for social acceptability, companies may develop code to identify and ‘appease’ these test cases but continue discriminating against real applicants.

In any case, this could be an interesting and important new field for social scientists to go into. However, one must be willing to combine the ‘soft’ social sciences with ‘hard’ stats and IT skills, and find the right learning venues to develop those skills.

Much food for thought. Thank you, Dr. Peter Vaughan!