AI Safety Summit: Beyond Tree-Climbing Humanoids and into the Real World

It was around the time that Elon Musk warned Rishi Sunak about robots chasing humans up trees that I realised the AI Safety Summit might not offer the kind of deep insight that many had anticipated.

I don't want to diminish the dangers of tree-climbing humanoids (I'm not much of a climber, so that would go badly for me), but this vivid imagery is symptomatic of an increasingly common tendency to mix existential future-gazing with practical discussions about enforceable AI regulation.

There are a lot of important points that still need to be ironed out: how, for example, can we prevent the spread of disinformation on social media platforms like Facebook while still nurturing the innovative spirit of start-ups? But these efforts are often overshadowed by more sensational discussions about potential misuses of AI that look remote in the near term.

The European Union has led on regulation so far and is concentrating its AI oversight on data privacy, surveillance, and the potential impact on human rights. It appears particularly keen on combating disinformation, and has pressured tech giants like Google, Facebook, and TikTok to label AI-generated content, a strategy that seems to be yielding results. As election campaigns in the UK and US heat up, the issue of disinformation is set to become even more prominent.

This will all take time to settle, but the regulatory net wouldn't be tightening if there weren't real problems with accuracy and privacy already. That software you use at work that increasingly leverages AI – do you really know what it's doing with your data?

At Fifth Dimension AI we've taken these issues seriously from the outset, which is why we've built a system designed to flourish as the use of AI becomes more widespread. Let me share three ways in which we're staying ahead of the game:

1. Privacy

Businesses using general-purpose AI like ChatGPT often face a privacy dilemma. If you're not paying for an enterprise plan and you're not using the API, OpenAI offers no guarantees about how your data is used. In fact, users have reported that OpenAI's new custom GPTs may leak private contextual data if a user simply asks for it. At Fifth Dimension AI, we have transparent Ts and Cs and real people you can talk to.

2. Guardrails

Our code checks what you've asked the AI to do. If we think the AI is going to hallucinate or lie, it won't service the request. Instead, we'll explain why and ask for more information. We proactively ensure our AI doesn't hallucinate, with thresholds calibrated to our end users' risk tolerances.
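To make the general pattern concrete, here is a minimal sketch of that kind of pre-flight check in Python. This is an illustration of the technique, not our actual code: `estimate_hallucination_risk`, the risk scores, and the `risk_tolerance` parameter are all hypothetical stand-ins.

```python
# Hypothetical sketch of a pre-flight guardrail: every request is scored for
# hallucination risk before it reaches the model, and requests that exceed
# the user's configured risk tolerance are declined with an explanation.

from dataclasses import dataclass


@dataclass
class GuardrailVerdict:
    allowed: bool
    reason: str | None = None


def estimate_hallucination_risk(prompt: str, grounding_docs: list[str]) -> float:
    # Placeholder heuristic: a request with no grounding material is high risk.
    # A real scorer would check whether the prompt is answerable from sources.
    return 0.9 if not grounding_docs else 0.2


def check_request(prompt: str, grounding_docs: list[str],
                  risk_tolerance: float) -> GuardrailVerdict:
    """Gate the request: refuse early, explain why, and ask for what's missing."""
    risk = estimate_hallucination_risk(prompt, grounding_docs)
    if risk > risk_tolerance:
        return GuardrailVerdict(
            allowed=False,
            reason=("I can't answer this reliably from the information provided. "
                    "Could you share more context or a source document?"),
        )
    return GuardrailVerdict(allowed=True)
```

The real logic is of course more involved, but the shape is the same: score the request before servicing it, refuse when the risk exceeds what the user has said they can tolerate, and come back with a reason rather than a confident fabrication.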

3. Human in the Loop

We place great importance on the human in the loop and on fact-checking. We're in the business of enhancing human decision-making, not advocating for an AI agent to take unsupervised actions on behalf of the user. We want to help people verify the accuracy of their work and reason better, not amplify any misinformation they might produce.
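Again purely as an illustration (the names and types here are hypothetical, not our product's API), the human-in-the-loop idea boils down to staging AI output as a draft that a named person must sign off before it counts as finished:

```python
# Hypothetical illustration of a human-in-the-loop gate: AI output is always
# staged as a draft, and nothing is finalised until a human reviews it.

from dataclasses import dataclass, field


@dataclass
class Draft:
    content: str
    sources: list[str] = field(default_factory=list)
    approved_by: str | None = None


def finalise(draft: Draft, reviewer: str, approved: bool) -> Draft | None:
    """The AI never finalises its own work; a human reviewer signs off.

    Returns the approved draft, or None if the reviewer rejects it and
    it goes back to the AI for another pass.
    """
    if not approved:
        return None
    draft.approved_by = reviewer
    return draft
```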

A lot of this comes down to our view of the future: we believe it belongs to human-in-the-loop, AI-driven software that enhances human productivity, not to a dystopian world rife with misinformation and job loss.

We think start-ups like ours are in a unique position to lead this charge. By focusing on specific use cases and industries, we can exert greater control over our products. We understand what our users are trying to do. This is a far cry from a product that promises to do everything but leaves you wondering about the controls in place.

So, if you want a taste of safe, effective AI of the kind regulators are attempting to shape, check out this demo. It's easier than clearing up data leaks from the tools you already use, or preparing for the day our tree-climbing overlords come knocking. You can be sure of that.
