AI for better?

Tom C W
4 min read · Oct 17, 2024


I’ve been fairly sarky about AI on the socials recently, and so before I lose my ‘tech club card’* I thought I’d write a little bit about how I actually use AI and how I think about AI For Better.

Ways I’m using AI, or am involved with it

At Data For Action we are using AI as part of our quest to ask better questions and to collaborate on action around those questions. Specifically, we are using embeddings and vector search for this. In simple terms, we are using AI to help understand, at scale, how questions are related, both in the words they use and in their context.
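As a rough illustration of what that looks like in practice, here is a minimal sketch of embedding questions and comparing them by cosine similarity. The model name, the example questions, and the library choice (sentence-transformers) are placeholders for the sake of the sketch, not our actual pipeline.

```python
# Minimal sketch: embed questions and compare them by cosine similarity.
# The model and the example questions are illustrative placeholders.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

questions = [
    "How can we reduce loneliness among older people in rural areas?",
    "What helps isolated older adults feel more connected?",
    "How do we cut carbon emissions from social housing?",
]

# Each question becomes a vector; related questions end up close together.
embeddings = model.encode(questions, normalize_embeddings=True)

# With normalised vectors, cosine similarity is just a dot product.
similarity = embeddings @ embeddings.T
print(np.round(similarity, 2))
# The first two questions score much higher against each other than against
# the third -- that closeness is the signal a vector search builds on.
```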

Our goal is to help humans to ask better questions and maybe support collaborative action in answering these questions.

For a short while (I wish it could have been longer, but life) I was involved in the Longitude Prize for Dementia, during which I got to see a whole range of organisations trying to come up with solutions to make life better for people living with dementia. The ability to do something different, and to blend the physical and AI worlds, is really interesting and offers great potential.

Another way I’m exploring AI is with something similar to GraphRAG. My specific exploration is about making better use of existing information or evidence that isn’t readily accessible or actionable because of its format. We have spent, and continue to spend, large sums of money producing more reports and evaluations when we know, or suspect, that these already exist. Maybe we can do something about this, reducing duplication and supporting better knowledge.
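To make the general idea concrete, here is a toy sketch in the spirit of GraphRAG: chunk existing reports, link related chunks into a graph, and answer a question by pulling in the best-matching chunk plus its neighbours, rather than commissioning something new. The libraries, threshold, and example text are all assumptions for illustration, not my actual experiment.

```python
# Toy sketch of a GraphRAG-style idea: link related report chunks into a
# graph, then retrieve a matching chunk plus its graph neighbours as context.
# Library choices, the threshold, and the text are illustrative only.
import networkx as nx
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "Evaluation of the 2021 youth employment pilot: outcomes and costs.",
    "Report on barriers to youth employment in coastal towns.",
    "Annual review of community transport schemes.",
]
vectors = model.encode(chunks, normalize_embeddings=True)

# Build a graph: nodes are chunks, edges connect sufficiently similar ones.
graph = nx.Graph()
graph.add_nodes_from(range(len(chunks)))
for i in range(len(chunks)):
    for j in range(i + 1, len(chunks)):
        score = float(vectors[i] @ vectors[j])
        if score > 0.4:  # arbitrary threshold for the sketch
            graph.add_edge(i, j, weight=score)

# Retrieval: find the chunk closest to the question, then add its neighbours.
question = "What do we already know about youth employment programmes?"
q_vec = model.encode([question], normalize_embeddings=True)[0]
best = max(range(len(chunks)), key=lambda i: float(q_vec @ vectors[i]))
context = [chunks[best]] + [chunks[n] for n in graph.neighbors(best)]
print(context)
```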

Photo by Tara Winstead: https://www.pexels.com/photo/handwritten-text-on-paper-and-crumpled-notes-8406974/

Are we in the FastAI era?

So I’m not against AI. What I am against is bad use and definitely hype.

Very little good comes from hype. Things are often not as good, or as bad, as the hype suggests. But what hype does do is polarise opinions and views. It makes it hard to have nuanced conversations in a sensible way. In this post I talked about the risk of tool-driven approaches becoming ideologies, blindly followed or dismissed without discussion. And that’s all AI is: a tool. But the hype creates, or at least reinforces, sides.

Hype also has consequences, whether intended or not. The ability to generate** content at the click of a mouse has led to an explosion in content. Everywhere you look, more and more content. Generate a document, a picture, even a podcast, all in minutes. But did we really need more content? Didn’t we just need better content? Or to use the content we already have, better?

For years we made more things, focused on quantity and minimum outlay. And we consumed.

Eventually we coined a phrase for it. Fast fashion. And we realised what a blight on the planet and people such an approach was. Maybe we are in the FastAI era.

There is talk that the use of GenAI has led to UK foundations being overwhelmed with applications, exacerbating an already precarious situation in UK philanthropy. Companies were quick to create and sell tools to generate funding applications FAST. And now foundations are having to respond. The Paul Hamlyn Foundation, for instance, has set out guidelines for AI use in applications. There is talk of foundations using AI to detect, umm, AI.

A cycle of waste, when the promise is efficiency. Fast doesn’t equal efficient.

What do we need to consider when using FastAI?

Look, AI isn’t going away. But there are things to consider when we are thinking about how we will, or won’t, use it. Let’s start with the environmental impact.

Google’s carbon emissions have risen by 48% since 2019, due in large part to AI and associated data needs. Microsoft is paying to restart nuclear power stations just to meet AI energy needs; data centers are drawing vast amounts of water; it goes on. Some will say that current models will get more efficient, and they are right, but history has shown we won’t stop at current models. Much like any consumption-driven market, there must be new models to drive the market; growth must be fed. What model number is your phone on? Is it really that much better than the one four numbers ago? Oh, this one’s got AI…

What about bias? A year ago I wrote People As Code as a way of exploring, and maybe prompting, a different way of thinking about our approach to AI. One of the core building blocks of GenAI is the training data for its models. If we think about this data as part of the code of any tool, then consider what that code contains, and what it doesn’t. We spent years trying to ensure lived experience was fully valued (it still isn’t), and yet we are so quick to use tools that exclude whole swathes of people, cultures, and lived experience.

So no, I’m not against AI. But I do think we need to be asking ourselves questions about how and why we use it. Friends of the Earth have been using AI in their innovation experiments, but they are at least carefully considering the consequences and the trade-offs of using it.

So this is an ask of you: when you are thinking about how and what you and your organisation will use AI for, consider some questions.

  • Does your use of AI make the lives of people you work with better?
  • Is this AI for better, or Fast AI?

*There isn’t a tech club card… or maybe there is and I’m not invited?

**I specifically use generate, not create.

Note — no AI was used to generate this post, except spell check.
