- AI
- Ethics
Stephanie Antonian - do we believe in AI like a religion?
AI - and Artificial General Intelligence - both seem to be all about science and technology. Yet the way we talk about them, and the narrative in the media, is much more about belief than science. In this episode, we talk with Stephanie Antonian, CEO of Aestora, about what she has seen in AI since her time working on ethics at Google - and we discuss what the future might hold, and what role charity could play.
Listen to this podcast on Spotify

Stuart McSkimming
Podcast host

Do we believe in AI like religion?
In today’s episode, we take a step back and have a think about where AI is heading, and whether we will reach AGI (Artificial General Intelligence). We talk with Stephanie Antonian who has been working on AI and ethics since before it became mainstream. We ask who is AI serving right now? And what can we expect in the future?
But before that, I’ve been considering a little bit about where we are now.
“When prompted to adopt a humanlike persona, GPT-4.5 was judged to be the human 73% of the time: significantly more often than interrogators selected the real human participant.” That’s a quote from a paper published earlier this year which argued that GPT-4.5 had passed the Turing test. Quite a moment - one which, six months on, almost seems to have passed us by.
Does that mean that machines are now as smart, or smarter than humans? Not quite.
In creating artificial intelligence, the current breed of LLMs is just a single component of ‘intelligence’.
If the challenge is, can we replicate and surpass what the human brain can do, then there are multiple ‘components’ we need to master. Firstly, a space to store and retrieve lots of knowledge. Tim Berners-Lee invented that one for us.
Next up, fill that space with knowledge and pictures of cats. Tick.
Then ChatGPT came along as a method of taking external queries and, using the internet as its knowledge bank, generating the answer that most people would find most convincing… which in many ways matches or surpasses what some humans can do.
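For the coders among you, that core idea - predict the most convincing continuation - can be sketched in a few lines. This is purely a toy illustration (a bigram word-counting model of my own invention for this newsletter, nothing like the neural networks inside a real LLM):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny
# corpus, then predict the most frequently seen next word.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word`, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" only once
```

Scale the corpus up to the internet, and swap the counting for a neural network over tokens, and you have the intuition - if not the engineering - behind an LLM.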
A third component (which I think hasn’t really had enough good press) is that, quietly over the past 40 years, technology systems have not only got quite good at speech, image and video generation, but can also now encode and match these to similar samples. [I say this as a coder who, in his teens, spent a lot of time out of boredom writing code on my Amstrad CPC 6128 to try and recognise patterns in images. Whilst on memory lane, I also have happy memories of the Ghostbusters game from Activision, where there was a barely recognisable sample of someone saying the word ‘Ghostbusters’ at the start of the game. Things have progressed quite a bit since then.]
Anyway, back to 2025. Where are we now in our quest to match or beat the human brain? We’ve got Knowledge, Senses, and the ability to respond and interact based on the previous two. Fantastic, we’ve now got enough to pass the Turing test.
But, as mentioned, this isn’t ‘intelligence’. What’s missing? There’s a lot of talk about AGI - Artificial General Intelligence. A very quick explainer: we’re talking about a machine that is capable of understanding, learning, and applying intelligence across a broad range of tasks - and that could learn and adapt without needing to be fine-tuned for every task.
Many people ‘believe’ we’ll create AGI in the next year or two. Some think it will take longer - and some think it isn’t possible at all. In recent weeks, there have even been rumours that OpenAI have created it internally. To be clear, whilst GPT-5 (now incorporated into Copilot, for those of you with licenses) is a big step forward, it definitely isn’t AGI.
Which takes me full circle back to my original question: can we surpass the human brain? Whilst, as I’ve argued above, we are assembling the components of intelligence - the ability to sense, memorise and give reasonable responses - there are still obvious things an artificial brain won’t have, such as thoughts, emotions, and nostalgic happy memories: the things that give us our personality, and make us living creatures.
AI definitely can’t do these at the moment, and these also don’t fit within the definition of AGI.
Going back to the Turing test: Alan Turing was a mathematician and computer scientist, not a neuroscientist. His test was never about human-like intelligence - it was about an invented concept, artificial intelligence. Perhaps some of the media confusion lies in muddling the two - which, ultimately, will never be the same thing.
Which leads me nicely to today’s podcast…
Season 3, Episode 6 - Do we believe in AI like religion?
I had a really good episode around AI and ethics in Season 1, Episode 5, with Karin Woodley. In it we discussed some practical challenges around ethics and bias in AI. But things move fast in the world of AI, and that was eighteen months ago… so it seemed like a topic worth revisiting.
In today’s episode, I talk with Stephanie Antonian, who has quite a CV from being an ethics advisor to DeepMind, through a spell with X, the Moonshot Factory, then on to founding her own AI company.
We dig a bit deeper than the normal conversations around how charities are developing AI plans, and instead look into what is happening with AI and the big tech firms - and where we might end up. We consider the challenges of how AI makes it harder to differentiate fact from fiction, and the obvious peril that this presents. We also talk about wealth consolidation and where this might lead us. We consider where charity fits into all of this, and what role it can take in shaping how humanity reacts in a world of AI.
As ever, views expressed are the views of the individuals concerned, and don’t necessarily represent those of any organisations they are, or have been associated with.
Stay Tuned for more insights in future episodes
If you’re new to the podcast - then definitely take a look back at some cracking episodes in the previous two seasons. Whilst there is a storyline flowing through the episodes, they work in any order, so don’t feel obliged to listen to them chronologically.
If you like this newsletter and podcast, and want it to stay free, please do consider reposting it on LinkedIn - we love watching our audience grow. Also do get in touch if you want to get involved, or sponsor anything.
If you’re keen for me to feature something going on in your charity, please get in touch - and do comment on anything in here, or in the podcast that you like.
Remember - follow on your favourite podcast platform today, so that you don’t miss an episode.
Spotify: https://open.spotify.com/show/0G3vGA98kpk6biZqQVsSVL?si=005d3205859d4668
Apple Podcasts: https://podcasts.apple.com/us/podcast/virtue-virtuosity/id1742994475
Podbean: https://www.podbean.com/podcast-detail/ynvth-2fcfed/Virtue—Virtuosity-Podcast
YouTube: https://www.youtube.com/@VirtueVirtuosity
Castbox: https://castbox.fm/channel/Virtue-%26-Virtuosity-id6126681
RSS feed: https://feeds.alitu.com/63115191
(please ask if you think there is somewhere else I should be publishing, and I’ll endeavour to add it!)
About our host and guests
Stephanie Antonian
Stephanie is the CEO and founder of Aestora, the AI research lab behind the Digital Health Score.
Previously, Stephanie worked with Google X, DeepMind, NASA, Harvard Innovation Lab, and Accenture on social justice initiatives. Her roles have been multi-functional covering Business Strategy, Public Policy, Data Science, and Product Management.
Stephanie was awarded the UK’s Frank Knox Fellowship to complete her MPP from Harvard Kennedy School focused on data privacy, and was a researcher at Harvard Law School with the ‘re-coding the law’ project. She holds an MA from St Andrews in Biblical Studies.
Stuart McSkimming, Podcast Host
Stuart is an independent consultant and founder of the sector-specific technology consultancy, Virtue Chain. He is an award-winning leader with over twenty years’ experience in NFP/charity leadership roles, predominantly in the technology/digital and transformation space. He is an expert in getting the most from teams, and in focusing organisations on the strategic goals that make the most of Technology & Digital. He is passionate about organisations focusing on inclusion and finding ways to attract a diverse mix of top talent into their teams. He has worked as a CIO for two organisations - Shelter and the Royal British Legion - as well as in a variety of roles elsewhere. Stuart is extensively networked in the not-for-profit sector, both in the UK and internationally, and is the Vice Chair of top membership organisation Charity IT Leaders. Stuart enjoys regular public speaking, and has also been known to do the occasional stand-up comedy gig.
Virtue Chain builds the link between enthusiastic, talented technology teams and an organisation’s strategic goals. By focusing on people, strategy, governance and decision-making structures, Virtue Chain can help your charity get the right leadership approach across Technology, Digital, Data, AI, and transformation. We use maturity models and partnering approaches to help trustees, CEOs and exec leaders see the potential for technology in their organisation, and understand where to start in turning ideas into action.
Get in contact with [email protected] if you’d like to chat. Typically the conversation starts from either a trustee, CEO or CTO level.
In case it’s not obvious, views expressed are Stuart’s own, and don’t represent those of any organisations he is working with or mentions.
Why not get in touch for an informal chat about how we can help?
There are a wide range of resources and tools we can use to help you to develop organisational capability in Technology, Digital and AI. Schedule a chat to discover how we can help.
