Artificial intelligence is more than creating humanised robots

As portrayed in science fiction novels, and demonstrated by the development of the world’s first humanoid robot, humans have always pushed the boundaries of technology despite controversial ethics and fragile public trust. Whilst machines have proven advantageous for performing time-consuming, laborious or physically demanding tasks, the boundary between the digital world and human biology has only become more blurred since the development of artificial intelligence (AI) systems, and it may be unsurprising that the majority of us find it hard to keep up.

As a scientist, it would be naïve to pursue a career in which I avoid interacting with AI; however, like many others, I continue to question where we, as a species, should draw the line between robots and humans. One particular example, for me, is the principle of self-driving cars. Whilst this reality remains highly novel, it highlights a fundamental consideration: where should we set the boundaries for the development of AI systems, and to what extent will minimising human ‘error’ affect our daily lives and skill sets?

This question is not one that I can realistically address in a blog article – but my hope is that AI will not replace humans, but rather complement our work. Nevertheless, as we continue to test the boundaries of our current world, it’s important that key aspects of artificial intelligence become more widely understood, including what exactly is considered AI and how it may influence our society. As a cancer researcher, I have come to learn about the principles of AI from within the field of medical research, and this is therefore the lens through which I’ll discuss its benefits to today’s society. As medical technology advances rapidly, particularly in the field of genomics, the utility of AI will undoubtedly continue to expand; my main concern is that science education has yet to adapt to how fast technology is progressing.

A brief history of AI (written by a biologist)

In 1935, the English mathematician Alan Turing proposed the idea of a mathematical algorithm for performing multiple stepwise analyses. His hope was that, one day, scientists would use this work to build machines that could learn and reason. During the Second World War, Turing led a group of scientists in designing a machine that could decipher incoming coded messages – a process known as cryptanalysis – which the German navy produced, using the Enigma machine, as a form of secure communication (Read more here). However, the term ‘Artificial Intelligence’ was not coined until 1955, by John McCarthy.

FILM: The Imitation Game, 2014 (Review, The Guardian)

Image Source: History of Artificial Intelligence, University of Queensland, Australia

Today, experts refer to two distinct ‘classes’ of AI: weak AI and strong AI.

One example of weak AI is Apple’s Siri (2011), which relies on speech-to-text technology and is capable of performing only one simple pathway (i.e. voice-to-text-to-search). Whilst many of us may not have recognised such technologies as AI, they have already entered daily life. Nevertheless, we are more attuned to listening out for new developments in strong AI technology.

Strong AI incorporates systems that are capable of generalisation, reasoning, problem solving, perception, relationships and situational judgement. The million-dollar question is: is creating a robotic human possible? Some experts believe it will be by 2050. You may be wondering, however, how these capabilities might be utilised in medical research. That’s a good question, and the answer certainly isn’t always intuitive.

Artificial Intelligence enters healthcare

AI entered the medical field in 1972 with the development of a system known as MYCIN, which scientists at Stanford University hoped would be able to diagnose – and improve the treatment of – blood infections using formulae and rules drawn from a knowledge base (Read more here).

Last year, the BBC reported that NHS England was to set up a National Artificial Intelligence Lab (NAIL) after the Health Secretary, Matt Hancock, announced £250 million to boost the use of AI in the UK’s healthcare system [ref.] An annual report by NHSX, the team of experts expected to facilitate NAIL, explains that AI could help deliver a more personalised healthcare approach, improve image recognition (as in digital pathology) and improve hospital management systems, whilst also highlighting the need for continuous, efficient re-evaluation when using such modern technology.

That sounds really exciting Holly, but where does AI come into medical research?

A related concept that I have recently come across in my area of research (cancer) is ‘machine learning’, which is often treated as a synonym for, or as analogous to, AI. However, it’s important to note that not all AI can “learn and improve from experience without being explicitly programmed” [1]. More specifically, machine learning refers to a system that becomes smarter over time and is less susceptible to information overload or distraction (unlike us humans!). However, it’s important to consider that the output of such systems is only ever as good as the training examples – the input data – that the machine is given. Herein lies the need for an active, human researcher.
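To make that idea concrete, here is a minimal from-scratch sketch in Python (the blood-sample measurements and labels are entirely invented for illustration): a one-nearest-neighbour “learner” that is never given explicit rules, only labelled examples, so its answers can only ever be as good as its training data.

```python
# A minimal 1-nearest-neighbour "learner" written from scratch
# (illustrative only; real research uses libraries such as scikit-learn).
# No rules are programmed in: it memorises labelled examples and
# classifies a new point by copying the label of its closest example.

def nearest_neighbour(training_data, query):
    """Return the label of the training example closest to `query`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda example: distance(example[0], query))
    return closest[1]

# Toy training set: (features, label). Imagine the features are two
# measurements from a blood sample (these numbers are made up).
training = [
    ((1.0, 1.2), "healthy"),
    ((0.9, 1.0), "healthy"),
    ((3.1, 2.8), "infected"),
    ((3.4, 3.0), "infected"),
]

print(nearest_neighbour(training, (1.1, 1.1)))  # -> healthy
print(nearest_neighbour(training, (3.0, 3.1)))  # -> infected
```

Notice that if the training examples were mislabelled or unrepresentative, the “learned” predictions would be confidently wrong in exactly the same way – which is the bias concern discussed below, and why a human researcher must curate the input data.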

Another phrase that is frequently used by experts is ‘deep learning’, which uses complex neural networks to decipher patterns in data. In 2019, Geoffrey Hinton, the godfather of deep learning, received the Turing Award for the development of neural networks, and distinguished neural networks by their capacity to learn rather than being “logic-inspired” [2]. In an interview with Nick Thompson (Wired), Hinton explains that neural networks can be thought of as information pathways based on our understanding of the human brain. Hinton goes on to highlight that “the bigger you scale (the input) up, the better it works (unsupervised, or machine-only)”, reducing the need for human classification.
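As a toy illustration of the “learning rather than logic” idea (a sketch of my own, not Hinton’s work), a single artificial neuron – the building block of a neural network – can be trained by gradient descent to reproduce the logical OR pattern purely from examples, with no rule ever written down:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Training examples: two inputs and the target output (logical OR).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# Start from random connection strengths ("weights") and a bias.
w1, w2, bias = random.random(), random.random(), random.random()
learning_rate = 1.0

# Repeatedly nudge the weights to shrink the prediction error.
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        grad = (out - target) * out * (1 - out)  # slope of the error
        w1 -= learning_rate * grad * x1
        w2 -= learning_rate * grad * x2
        bias -= learning_rate * grad

# After training, the neuron has "discovered" OR from examples alone.
for (x1, x2), target in data:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + bias)))
```

Deep learning stacks many thousands of such units into layers; the principle, though, is the same – the behaviour emerges from examples rather than from hand-written logic.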

Recent studies highlight the potential for deep learning to “unlock the benefits of precision medicine” and to predict drug response and/or adverse drug reactions. One real benefit of AI is its ability to analyse data in real time, much like the Waze app, which continuously searches for a quicker transport route whilst you’re on the go. However, major concerns about using AI remain: mistrust of its security, insufficient transparency, and worries over data bias – particularly in circumstances where AI is learning from past human decisions with inherent error, e.g. recruitment. Unlike current AI, humans provide intuition, context and critical judgement, which could arguably minimise machine bias (this is not to suggest humans do not make mistakes).

Read more about the potential for AI in Clinical Trials here.

By automating repetitive tasks in the workplace, AI allows more time for the interpersonal responsibilities that machines are incapable of performing (e.g. creativity, improvisation, dexterity, judgement, social and leadership skills). Some argue that AI can provide ‘insight’ without the need to set rules. Other tasks where AI may become advantageous include routine safety checks – freeing researchers to complete exciting hands-on experiments, focus on presenting data and (hopefully) encourage more funding. (Credit to a short course entitled “Digital Skills: Artificial Intelligence by Accenture” on FutureLearn.)

Current AI and machine learning systems have the capacity to provide more accurate, reproducible data in a fraction of the time; however, it is essential that these skills are woven into science education. Since endeavouring to understand what is actually meant by AI, I’ve read mixed opinions of ‘the fourth revolution’, with some fears heightened by “anxiety-inducing headlines” – both accurate and dramatised – and others remaining “cautiously optimistic” about the benefits of AI in healthcare. It’s important that we consider artificial intelligence as more than creating humanised robots, but it’s equally important not to become over-reliant on algorithms. In order for healthcare to benefit from AI, it must be sufficiently integrated and trusted beyond the realm of the experts. With that said, I’d better get back to learning how to code.


Could artificial intelligence make doctors obsolete? | BMJ Debate, 2018


Meet Sophia, World’s First AI Humanoid Robot | Tony Robbins

The Turing Test: Can a computer pass for a human? | Alex Gendler, TED-Ed

Robots in 2050 and Beyond | The Inaugural Lecture of Lord Rees, Science Museum

An AI Pioneer Explains the Evolution of Neural Networks | Nick Thompson in conversation with Geoffrey Hinton, Wired

FICTIONAL READING: Machines like Me – Ian McEwan


Related articles:

  1. Scientists develop AI that can turn brain activity into text | Nicola Davis, The Guardian
  2. AI can search satellite data to find plastic floating in the sea | Donna Lu, New Scientist
  3. Building ethical and responsible AI systems does not start with technology teams | Swathi Young, Forbes

Featured Image: BMJ, 2018


Published by Holly Leslie

Full-time Cancer Researcher + Freelance Science Writer | MRes, BSc | Since discovering my passion for science writing during my final year of undergraduate study, I've written articles for University newspapers, The Gaudie and Redbrick and two Science magazines, Wonk! and the Glasgow Insight to Science and Technology (GIST)
