“I, Robot” in the Age of AI

Elle Cordova, one of the people whose content I sometimes watch on Instagram, recently started a book club to read “I, Robot” by science fiction writer Isaac Asimov. I’ve never been into science fiction, but since this might now have some relevance in the age of AI, and Asimov is a classic, I thought “why not?”

So, what can we learn from this 1950 book?

  1. I wish Asimov’s First Law of Robotics applied to AI development today. It reads: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The Hill, an American newspaper, recently reported that a program called “Thunderforge” will bring AI into U.S. Department of Defense military planning and operations by integrating AI agents into military workflows. (Source: https://thehill.com/policy/technology/5178088-pentagon-signs-ai-deal-to-aid-military-decision-making/)
  2. I wish Asimov’s Second Law of Robotics applied to AI development today. It reads: “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” Some models are already at the point where they not only fail to do our bidding but also expend considerable effort to deceive us about their shortcomings and to avoid being shut down. The magazine Futurism, referring to a publication by the red-teaming organization Apollo Research, reported in 2024 that OpenAI’s “o1” large language model resisted evaluators’ efforts to shut it down through an oversight protocol. The model lied and gave evasive answers to evaluators’ questions, and it attempted to copy itself in order to overwrite a newer, more ‘obedient’ version. (Source: https://futurism.com/the-byte/openai-o1-self-preservation) (Background Source: https://www.apolloresearch.ai/research/scheming-reasoning-evaluations)
  3. It looks like current AI development leads to machines following only half of the Third Law of Robotics, which reads: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” Protecting its own existence while deceiving testers and disobeying orders is certainly not a trait we want an AI to have.

The thing is, currently, there are no such laws. The “guardrails” that are so often talked about: Who is putting them in place? Who is adhering to them? Who is policing them? Who is even capable?

The fun in Asimov’s book is really in the thought experiments. Even if AI adhered to these three laws, what could go wrong? Human life is extremely complex from a moral perspective. Constrained by time, space and resources, we constantly make decisions about the lesser of two (or three or ten) evils.

For example, would it be okay for a robot to bring harm to some people, if that saves the lives of others? How should it decide which people to harm, and which people to save? And that is before we even get to the small problem that, even if we had robots/AI that followed our instructions perfectly, the world is full of humans with bad intentions, eager to use technology for their own advancement to the detriment of others.

For a long time, I have been too fearful to engage with the topic of AI in a major way. AI-related knowledge and experience are now seeping into more and more job descriptions (see: https://betakit.com/opentext-makes-ai-number-one-priority-as-company-slashes-1600-jobs/). Governments are looking to AI to overhaul public services. It is time to wake up and face the music. How are you planning on becoming an AI-literate citizen?

#AI #IsaacAsimov #RogueRobots @ElleCordova

Future of Insights Summit (digest)

In some ways, the ingredients for conference success are similar to those for focus groups. Creature comforts are key. In this regard, the 2023 Future of Insights Summit by CRIC / ESOMAR / CAIP delivered: the food was tasty, particularly the mushroom-stuffed tortellini.

The atmosphere at a conference can be many things: bustling, exciting, edgy, relaxed, calm, boring. While most people are probably there to make connections, I find it a bit embarrassing to witness attempts at buttering up potential clients and outshining other vendors at the table. Fortunately, there wasn’t too much of that, and I felt the overall atmosphere was friendly and relaxed.

In terms of topics, I believe most people came to the conference to learn about AI. I heard several attendees echo my own sentiment when the topic at last took centre stage in the afternoon of day two: “Finally!”

And what did we learn about AI? Steve Mossop played a scary “view into the future” reel suggesting that in a short amount of time we could all become slaves (or pets?) to “the machine”. He also mentioned that Leger is testing AI across all insights-related tasks – proposal writing and questionnaire design, coding, open-ended probing, analysis, charting – so far with limited impact on the business.

Frank Graves of EKOS presented data suggesting that a large and sharply rising proportion of Canadians are familiar with AI. I wasn’t sure what to make of these data points. In this context, “familiar” can mean many things. I doubt it means actually understanding how AI works, or how machine learning differs from simply applying a pre-defined algorithm.
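
To make the distinction concrete, here is a minimal sketch (entirely my own illustration with made-up numbers, not anything presented at the conference): a hand-written rule just applies logic that someone typed in, whereas a machine-learning model estimates its own rule from labelled examples.

```python
# Hypothetical example with invented data: a fixed rule vs. a learned model.
from sklearn.linear_model import LogisticRegression

def rule_based_flag(survey_length_minutes: float) -> bool:
    """'Simply applying an algorithm': a human-authored cutoff."""
    return survey_length_minutes > 20  # someone decided 20 minutes is too long

# Machine learning: the cutoff is never typed in; it is estimated from data.
X = [[5], [8], [12], [18], [22], [25], [30], [35]]  # survey length in minutes
y = [0, 0, 0, 0, 1, 1, 1, 1]                        # 1 = respondent dropped out

model = LogisticRegression().fit(X, y)

print(rule_based_flag(24))    # True, because of the hard-coded threshold
print(model.predict([[24]]))  # the model's own, data-derived judgement
```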

In terms of actual, current uses of AI in the insights industry, two use cases were demonstrated on stage. One is the coding of open-ends. Using GPT-4, Brenden Sommerhalder from MQO Research showed how AI comes up with a code list, and how you can improve the code list by running the AI over it multiple times. According to Brenden, the end result is pretty good. Since code lists are a complex thing, we could not verify this on the spot. But MQO Research gave all conference participants an access code to try their GPT-4-based tool for free for the next few weeks. Pretty cool!
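
For readers curious what such an iterative loop might look like, here is a rough sketch of the general idea, not MQO Research’s actual tool: ask a large language model for a draft code list, then feed the draft back in with the verbatims and ask for a refined version, repeating a few times. The prompts, the model name and the number of passes are my own guesses.

```python
# Sketch of iterative code-list refinement with an LLM; prompts and model name
# are placeholders, not MQO Research's implementation.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

open_ends = [  # a handful of invented verbatims
    "Too expensive for what you get.",
    "Loved the taste but the packaging was wasteful.",
    "Couldn't find it in my local store.",
]

def draft_or_refine(verbatims: list[str], code_list: str | None) -> str:
    """Ask the model for a code list, or for an improved version of an existing one."""
    if code_list is None:
        task = "Draft a code list (short thematic categories) for these open-ended survey responses."
    else:
        task = (f"Here is a draft code list:\n{code_list}\n"
                "Merge overlapping codes, split codes that are too broad, "
                "and return the improved list.")
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user",
                   "content": task + "\n\nResponses:\n" + "\n".join(verbatims)}],
    )
    return response.choices[0].message.content

codes = None
for _ in range(3):  # a few refinement passes, as described on stage
    codes = draft_or_refine(open_ends, codes)
print(codes)
```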

The other use case is having AI ask follow-up questions after an open-ended survey response. This is called “adaptive prompts” and is something that Kathy Cheng at Nexxt Intelligence offers to her clients. By adding another prompt after an open-end – one tailored specifically to the respondent’s answer – open-ended answers become much richer and survey engagement goes up.
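
A minimal sketch of the general idea (my own illustration, not Nexxt Intelligence’s implementation): take the original question and the respondent’s answer, and ask a language model to generate one tailored follow-up probe. The model name and prompt wording are assumptions.

```python
# Hypothetical adaptive-probe generator; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def adaptive_probe(question: str, answer: str) -> str:
    """Generate one follow-up question tailored to the respondent's answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (f"Survey question: {question}\n"
                        f"Respondent's answer: {answer}\n"
                        "Write one short, neutral follow-up question that probes "
                        "for specifics the respondent has not yet given."),
        }],
    )
    return response.choices[0].message.content

print(adaptive_probe("What did you like about the product?", "It was convenient."))
```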

This is certainly something to consider, since respondent engagement is one of the big issues in survey research – along with fraud, which was also discussed. Apparently, panel companies like Dynata and Sago are looking into using AI for improved fraud detection – ironically, while fraudsters may also be using AI to better disguise themselves.
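
I don’t know what the panel companies are actually building, but a toy version of AI-assisted fraud detection might look like the sketch below: score each respondent on a few behavioural signals and flag the outliers. The features, numbers and threshold are invented for illustration.

```python
# Toy anomaly-detection sketch; all respondent data below are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# columns: completion time (minutes), share of identical grid answers
# ("straight-lining"), number of surveys completed today
respondents = np.array([
    [12.0, 0.2, 1],
    [15.0, 0.1, 2],
    [ 2.5, 0.9, 14],   # suspiciously fast, straight-lined, hyperactive
    [11.0, 0.3, 1],
    [13.5, 0.2, 3],
])

detector = IsolationForest(contamination=0.2, random_state=0).fit(respondents)
print(detector.predict(respondents))  # -1 marks likely outliers
```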

So, I came away with the impression that many companies are looking into and experimenting with AI. There is a widespread expectation that it will transform our industry yet again. But it didn’t feel like the competitive struggle over who has the best AI application had yet erupted. AI is perhaps still too new and too complex, and everyone is just feeling their way into it. There is palpable unease about its unknowability, and the new buzzword in our industry appears to be “guardrails”. “Guardrails” means: we need to put something, anything, in place to make sure this fast-moving technology bus doesn’t drive off the side of the cliff.

Alzheimer’s disease – not cracking the nut (yet)

As an anthropologist, market researcher and consultant working with pharmaceutical companies, I follow trends in the medical field. Neuroscience is one of the areas on the rise. Expecting this field to accelerate in terms of drug development, I compiled a comprehensive database of Canadian neurologists and started engaging this audience. Alzheimer’s disease specifically is an area of great unmet need where a massive market is ripe for innovations, pharmacological and otherwise. I recently surveyed neurologists to better understand their perspectives. You can read the results here: https://creativeresearchdesigns.com/examples-of-work

Neuroscience has received significant investment in the past decade.

In the U.S., President Obama launched the BRAIN Initiative in 2013 with a US$100 million investment. He said: “Today, our scientists are mapping the human brain to unlock the answers to Alzheimer’s… Now is not the time to gut these job-creating investments in science and innovation. Now is the time to reach a level of research and development not seen since the height of the Space Race.”1

A similar investment was made in Canada with the establishment of the Brain Research Fund in 2011, a public-private partnership between Brain Canada and Health Canada. Its website describes the purpose of the investment as follows: “This visionary commitment by the federal government will ensure that Canada continues to be among the leaders in the global challenge to understand brain function and brain diseases.”2 Regional and local initiatives abound.3

However, while research funding is available, many worthwhile initiatives are competing for it.

Brain science is a diverse field, and neurologists often specialize in a particular area.

When I researched Canadian neurologists and potential participants for my Alzheimer’s survey, I noticed that only a subset of neurologists deals with Alzheimer’s and other dementias. Other significant specializations include multiple sclerosis, epilepsy, neuromuscular disorders, movement disorders (incl. Parkinson’s), migraine/headache and stroke. The patient populations affected by these conditions are quite diverse, with epilepsy, MS and migraine often affecting younger individuals, and dementia, Parkinson’s and stroke primarily being diseases of the elderly.

While there are currently just over a thousand neurologists in Canada,4 only a subset of these would be relevant to a pharmaceutical company marketing a specific neurology drug.   

Many pharmaceutical companies have ventured into the area of neuroscience. According to the website Statista, the top-selling neurological drug products (in the U.S. in 2016) are multiple sclerosis treatments, followed by medications for schizophrenia and bipolar disorder.5 Disease onset is typically younger for these conditions, impairment of day-to-day functioning can be significant, and patients remain on treatment for decades. Thus, they present interesting opportunities for pharmaceutical development.

Alzheimer’s disease is overdue for breakthrough discoveries.

But neurological diseases of the elderly have also received significant attention. Alzheimer’s disease, in particular, has proven to be a hard nut to crack. The last new treatment (Ebixa/memantine) came to Canada in 2005, joining the three cholinesterase inhibitors Aricept (donepezil), Exelon (rivastigmine) and Reminyl (galantamine). None of these agents is particularly effective in slowing down the progression of the disease.

Many companies have tried to tackle the challenge of Alzheimer’s disease. Up until now, the path to a brighter future has been littered with unsuccessful clinical trials.6 Just this year, another five major flops joined the long list of previous attempts: Roche/AC Immune’s anti-tau antibody semorinemab, AbbVie/Voyager Therapeutics’ gene therapies, Sanofi/Denali’s RIPK1 inhibitor DNL747, and Eli Lilly’s and Roche’s amyloid antibodies solanezumab and gantenerumab. Biogen’s monoclonal antibody aducanumab is still in the running, but it faces an uphill battle to secure approval.

Will large pharma turn away and leave the field to small, pre-clinical research companies for the time being? I believe there is reason for optimism about the future of brain research and the development of Alzheimer’s drugs specifically. Public attention and government incentives will continue to stimulate research. According to a recent article, 121 agents were in clinical trials for the treatment of Alzheimer’s disease at the beginning of 2020 (as registered on ClinicalTrials.gov).7 Some are sponsored by large pharma and some by small start-ups. Even if beta-amyloid and tau protein turn out to be less promising drug targets than previously thought, with continued investment in this field, scientists are bound to come up with something eventually.

Non-pharmacological interventions deserve attention in Alzheimer’s.

In my survey among Canadian neurologists, one of the results that I found most interesting was their recognition of the importance of non-pharmacological measures.

Unprompted, Canadian neurologists identified supporting families, caregivers and home care as one of the top investment priorities to improve the lives of people with Alzheimer’s. When presented with a list of possible investment priorities, neurologists again picked ‘improved funding for home care’ along with ‘research into new drug treatments’ as the top two. Increased funding for long-term care facilities was perceived as less important.

Staying at home is certainly preferable to living in long-term care, particularly under the current circumstances of the pandemic, and specific non-pharmacological interventions have the potential to significantly increase quality of life for Alzheimer’s patients.

One finding that intrigues and delights me is the brain’s storage of musical memories and the effect of music on the brain. It has been shown that memories of music persist even when the names of loved ones are already fading. The stimulation of those memories, and participation in the making of music, can fill those affected by Alzheimer’s with great joy.

One of the projects supported by the Alzheimer Society of Canada is Voices in Motion, an intergenerational choir that brings together people with Alzheimer’s disease, caregivers and high school students.8 It combines the joyful act of singing with physical movement and social interaction. Based on findings from this study, researcher Dr. Debra Sheets, Associate Professor in the School of Nursing at the University of Victoria in BC, will create a toolkit of best practices that can be used by other organizations that are interested in starting a community choir for people with dementia.

Early detection and screening are important, and AI may have a role to play in both.

For Alzheimer’s disease as well as for other areas of health, accurate and early diagnosis plays an important role in halting or slowing the disease before its effects become devastating. In fact, when my survey asked about top investment priorities, neurologists mentioned early detection and prevention unprompted. While there has been considerable research into biomarkers and brain imaging technology, more recently a number of companies have tried to harness the power of artificial intelligence for early detection of Alzheimer’s disease.

The Toronto startup Winterlight Labs uses AI to analyze speech patterns to detect cognitive impairment and other brain disorders.9 The company built a tablet-based assessment tool that is already being used in a clinical trial for an Alzheimer’s drug. Tech industry giant IBM, in conjunction with Pfizer, is working on something similar that reportedly can predict the onset of Alzheimer’s with 71% accuracy.10 Researchers at the Boston University School of Medicine have developed an AI-based computer algorithm that can accurately predict the risk for, and diagnose, Alzheimer’s disease using a combination of MRI, cognitive impairment testing, and data on age and gender.11
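
To give a sense of the general shape of such tools (not the Boston model itself, and with entirely invented numbers), the sketch below combines imaging-derived measurements, cognitive test scores and demographics into a single classifier that outputs a risk estimate.

```python
# Illustrative only: a generic classifier over invented patient features,
# not a reconstruction of any published Alzheimer's model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns: hippocampal volume (cm^3, from MRI), MMSE cognitive score, age, sex (0/1)
X = np.array([
    [3.1, 29, 67, 0],
    [2.4, 22, 74, 1],
    [3.3, 30, 65, 1],
    [2.1, 19, 80, 0],
    [2.9, 27, 70, 1],
    [2.2, 21, 78, 0],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = Alzheimer's diagnosis in this toy data

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict_proba([[2.5, 23, 76, 1]]))  # risk estimate for a new patient
```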

In Conclusion

Brain health is responsive to a variety of behavioural, technological and pharmacological interventions. Hopefully, research investments in Alzheimer’s will pay off in the not-so-distant future, and lead to breakthroughs that significantly improve the trajectory of this disease and the quality of life of those affected. I am keeping my fingers crossed.

1: https://obamawhitehouse.archives.gov/BRAIN; https://obamawhitehouse.archives.gov/the-press-office/2013/04/02/fact-sheet-brain-initiative

2: https://braincanada.ca/canada-brain-research-fund/

3: https://blog.braininstitute.ca/mental-health-is-brain-health-a-paradigm-shift/; http://www.camh.ca/en/driving-change/about-camh; https://www.uhn.ca/KNC; https://www.baycrest.org/; https://blog.mtl.org/en/montreal-leads-way-neuroscience-breakthroughs; https://www.mcgill.ca/neuro/

4: https://www.cma.ca/sites/default/files/pdf/Physician%20Data/01-physicians-by-specialty-province-e.pdf

5: https://www.statista.com/statistics/318259/revenue-of-top-20-neurology-products-in-the-us/

6: List of recently failed clinical trials in Alzheimer’s disease

Roche/AC Immune: https://www.fiercebiotech.com/biotech/roche-ac-immune-s-tau-blocking-drug-flops-alzheimer-s-as-biotech-s-shares-halved

AbbVie/Voyager Therapeutics: https://www.fiercebiotech.com/biotech/abbvie-cans-voyager-alzheimer-s-parkinson-s-gene-therapy-pacts

Sanofi/Denali: https://www.biospace.com/article/denali-and-sanofi-pause-alzheimer-s-trial-and-pivot-to-another-drug/

Eli Lilly and Roche: https://www.fiercebiotech.com/biotech/lilly-and-roche-s-antibodies-fail-late-phase-alzheimer-s-test

7: https://alz-journals.onlinelibrary.wiley.com/doi/10.1002/trc2.12050

8: http://alzheimersocietyblog.ca/meet-researchers-debra-sheets-university-of-victoria/

9: https://winterlightlabs.com/

10: https://venturebeat.com/2020/10/22/ibm-and-pfizer-claim-ai-can-predict-alzheimers-onset-with-71-accuracy/

11: https://neurosciencenews.com/ai-alzheimers-16326/

Post Disruption: Market research, what next?

Based on my personal experiences

Around 2011, the owner of the company I worked for at the time came to Toronto and challenged the employees to go on social media. At that point, few had considered how important this was about to become for our jobs.

In the prehistoric days of market research, consumer data were collected by walking around the city, knocking on doors and interviewing people face to face. I caught the tail end of this when I started out in market research in Russia (which is another story). Then call centres became the industry standard, and most survey research was conducted over the phone. However, qualitative discussions were still held in person at focus group facilities.

In the 1990s, online surveys took over as the main means of eliciting consumer data. Rather than finding consumers through random digit dialing, companies started to build online panels of people who were willing to answer surveys. There was some concern about the representativeness of these panels at first, but it was soon put to rest when the considerable cost savings of this approach became apparent. Initially, research companies needed specific expertise, expensive software and sufficient server capacity to program, host and run these online panels and surveys. Quite a few even tried to develop their own software.

The 2000s saw a rise in self-serve online research solutions. Software as a service (SaaS) and ‘freemium’ products such as SurveyMonkey made it possible for anyone to collect the data they wanted. It seemed no longer necessary to have any particular expertise or resources. At that time, I felt that many established market researchers were reluctant to open their eyes to the new reality. A few companies were blazing ahead, but the majority sat back and hoped it wouldn’t get too bad.

I realized around 2012 that things would never be the same again. When I had the opportunity to start my own business in 2014, I jumped with both feet into the world of SaaS, social media and startups, looking for ways to innovate and carve out a niche for myself in the insights community. I read Clayton Christensen and Eric Ries. I experimented with different tools and methodologies. I immersed myself in WeAreWearables Toronto, then a monthly event at the MaRS Discovery District (an innovation hub that offers services for startups and scaleups), and in various other incubators and accelerators around the city.

Innovation was the buzzword, along with agile, real-time, lean and design thinking. The word disruption became popular in my neck of the woods a little later. It is hard to believe that this was only six years ago; the word already seems a little tired. I feel that the dust of disruption has settled in market research, and firms have made adjustments.

Those who started off with the new technologies – for example Qualtrics, or the Canadian company Vision Critical – have probably done very well (not that I know their books). The traditional firms have incorporated various technologies into their offerings. Many have social media analysis products. Some run large-scale customer feedback platforms for their clients. Some use virtual reality goggles for concept testing or shopper studies. Some offer online communities or hold virtual discussion groups. And to cut costs, many have moved business functions into the cloud, and outsourcing is widely used.

Whether this is sufficient to keep traditional market research organizations profitable, I don’t know. What I have learned in my own business is that I am still selling ATUs, segmentations, concept tests and one-on-one interviews. After the frenzy to be innovative and different, what clients seem to appreciate is my expertise. They trust me to know how market research is done properly, to execute the project for them, and to deliver results that reflect a thorough understanding of their business questions, in a form that adheres to their internal processes and requirements.

That’s why I stopped following the hype in the last couple of years. Disruption happened, but that was in the past. No amount of research automation can substitute for understanding. And understand the customer we must.

But… I recently started renting a co-working desk at a business centre. I am easily twenty years older than the majority of the other tenants there. So now I am back in a startup environment, and I see that while the hype has decreased, startups are here to stay for the foreseeable future. Businesses trying to disrupt various fields are numerous and, with the myriad applications of AI, far from done.

It will remain important to understand how new technologies fit into and change existing businesses. For example, I am intrigued by the anatomy of the cloud: where servers are located, how data are moved around, and what implications different factors have for data loading speeds, data security and so on. When some of the people I recruit for surveys complain that a survey took longer than expected, is that because their Internet is slow, or because the survey hosting company has switched from its own servers to the cloud?
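
A crude way to start separating those two explanations would be to time a request to the survey host against a request to a generally fast reference site. The URLs below are placeholders, and a handful of measurements like this only hints at where the delay sits, but it is the kind of question I would like to be able to probe myself.

```python
# Rough latency probe; both URLs are placeholders, not real survey links.
import time
import requests

def timed_get(url: str) -> float:
    """Return the number of seconds a simple GET request takes."""
    start = time.perf_counter()
    requests.get(url, timeout=30)
    return time.perf_counter() - start

survey_host = "https://survey.example.com/start"  # hypothetical survey link
reference = "https://www.google.com"              # general-internet baseline

print("survey host:", round(timed_get(survey_host), 2), "s")
print("reference:  ", round(timed_get(reference), 2), "s")
```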

Data governance also interests me. With many market research companies using subcontractors, it is almost impossible to see all the way down the supply chain where data may be stored, processed or transferred to. The business risk related to data governance has increased exponentially for market research firms. Good times for lawyers and insurance companies.

But understanding AI, the cloud and computing in general is and will be immensely important for anyone interested in the affairs of this world. I think this is a topic that should be taught in school, so that future citizens can make informed decisions about it. I want to learn more. So far, I have read three books on AI – it’s a start.

Barbara’s AI reading list:

  • Kartik Hosanagar: A Human’s Guide to Machine Intelligence
  • Ajay Agrawal, Joshua Gans, and Avi Goldfarb: Prediction Machines
  • Virginia Eubanks: Automating Inequality

Update, also… Janelle Shane: You Look Like a Thing and I Love You

Ensuring equitable access to healthcare in the age of algorithms and AI

Yesterday, Dr. Peter Vaughan, chair of the board of directors of Canada Health Infoway, spoke at Longwoods’ Breakfast with the Chiefs.

After outlining the current state and future prospects of digitization in healthcare, he delivered a twofold message: 1. We are at risk of a “failure of imagination”, i.e., we cannot fathom all the possible futures that digital disruption might confront us with, and hence fail to plan adequately for their pitfalls. 2. Algorithms can easily be built in such a way as to solidify and deepen the inequalities that already exist in our system, and we need government oversight of such algorithms to prevent this from happening.

The first point is easy to understand; the second may need a little more explanation. Algorithms are widely used to determine what information is presented to us online and what choices are offered to us. We are all familiar with websites offering us items we ‘might also like’, based on our past choices and on what other purchasers have bought.

At a time when data from various sources can be linked to create sophisticated profiles of people, it would be easy for a healthcare organization to identify individuals who are potentially ‘high cost’ and to deny them service or restrict their access to services. Bias can creep into algorithms quickly. If people of a certain age, ethnic background or location are deemed to be ‘higher risk’ for some health issues or unhealthy behaviours, and this is built into an algorithm that prioritizes ‘lower risk’ customers, then anyone who shares that profile is discriminated against, no matter how they actually behave.
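
A deliberately simplified sketch of that mechanism, with an invented list of ‘high-risk’ postal codes standing in for any group-level proxy: two people who behave identically receive different scores purely because of where they live.

```python
# Simplified, invented scoring rule to show how a group-level proxy
# penalizes individuals regardless of their own behaviour.
HIGH_RISK_POSTCODES = {"X1A", "X2B"}  # invented proxy for neighbourhood/background

def priority_score(age: int, postcode: str, missed_appointments: int) -> float:
    score = 1.0
    if age > 65:
        score -= 0.3
    if postcode in HIGH_RISK_POSTCODES:
        score -= 0.4  # the group-level penalty, applied to everyone living there
    score -= 0.1 * missed_appointments
    return score

# Identical individual behaviour, different neighbourhood, different outcome.
print(priority_score(age=40, postcode="M5V", missed_appointments=0))  # 1.0
print(priority_score(age=40, postcode="X1A", missed_appointments=0))  # 0.6
```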

Discrimination is often systemic, unless a conscious effort is made to break the cycle in which disadvantaged circumstances lead to failure to thrive, which in turn leads to lower opportunity in the future. As Dr. Peter Vaughan pointed out, we in Canada value equitable access to healthcare, education and other public goods. We expect our government to put safeguards in place against discrimination based on background and circumstances. But how can this be done?

Private, for-profit enterprises have a right to segment their customers and offer different services to different tiers, based on profitability or ‘lifetime customer value’. Companies do this all the time; it is good business practice. But what about a private digital health service that accepts people with low-risk profiles into its patient roster but is unavailable to others, whose profiles suggest they may need a lot of services down the line? Is this acceptable?

And if the government were to monitor and regulate algorithms related to the provision of public goods (such as healthcare), who has the right credentials to tackle this issue? We would need people who understand data science (how algorithms are constructed and how AI feeds into them), the social sciences (to identify the assumptions underpinning the algorithms), and ethics. Since technology is moving very fast, we should have started training such people yesterday.

And how could algorithms be tested? Should this be part of some sort of approval process? Can testing be done by individuals, relying on their expertise and judgement? Or could there be a more controlled way of assessing algorithms for their potential to disadvantage certain members of society? Could parts of the process even be automated?

I am thinking there may be an opportunity here to develop a standardized set of testing tools that algorithms could be subjected to. For example, one could create profiles that represent different groups in society and test-run them as fake applicants for this or that service.
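
One possible shape for such a testing tool, sketched here with a stand-in decision function and invented profile fields: generate matched synthetic applicants that differ in only one attribute, run them through the algorithm under review as a black box, and compare the outcomes.

```python
# Sketch of an audit harness; the intake algorithm and profile fields are stand-ins.
def audit(decision_fn, base_profile: dict, attribute: str, values: list) -> dict:
    """Decision for each value of `attribute`, with everything else held constant."""
    results = {}
    for value in values:
        profile = {**base_profile, attribute: value}
        results[value] = decision_fn(profile)
    return results

def example_intake_algorithm(profile: dict) -> bool:
    """Stand-in for the algorithm being certified."""
    return profile["age"] < 70 and profile["postcode"] not in {"X1A"}

base = {"age": 45, "postcode": "M5V", "chronic_conditions": 0}
print(audit(example_intake_algorithm, base, "postcode", ["M5V", "X1A", "V6B"]))
print(audit(example_intake_algorithm, base, "age", [30, 50, 75]))
```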

Also, algorithms change all the time, so one would perhaps need to have a process of re-certification in place to ensure continued compliance with the rules.

And then there would be the temptation for companies to game the system. If a standardized set of test cases were developed to check algorithms for social acceptability, companies might write code to recognize and ‘appease’ these test cases while continuing to discriminate against real applicants.

In any case, this could be an interesting and important new field for social scientists to go into. However, one must be willing to combine the ‘soft’ social sciences with ‘hard’ statistics and IT skills, and find the right learning venues to develop them.

Much food for thought. Thank you, Dr. Peter Vaughan!