Posts in: AI

🌟 Rethinking AI: From Misconceptions to Clarity 🌟

The term ‘artificial intelligence’ (AI) still causes a lot of confusion, so we should rebrand it as ‘machine intelligence’.

What we call AI doesn’t “understand” the way humans do. It recognises and creates patterns based on probabilistic models. Understanding this difference matters because it helps us set realistic expectations and make the most of machine intelligence.

Even though this might seem like splitting hairs, let’s embrace the nuance and educate others about the true nature of AI. By doing so, we can help build a more informed and innovative community, driving forward the impressive advances that machine intelligence offers.

If we can cut through the noise and get to the heart of what AI is really about, it becomes much easier to work with it in a meaningful way. This clarity doesn’t ignore the challenges, such as high resource consumption and potential negative impacts, e.g. on the workforce. But it helps us talk more clearly and sensibly about what we can use it for and what problems we might run into.

#MachineIntelligence #Innovation #FutureOfWork

A vibrant digital art piece depicts swirling waves of interconnected particles and light, creating a dynamic flow of energy.

While the large and resource-intensive #LargeLanguageModels #LLMs are grabbing headlines, it's likely that 'tiny AI' #tinyAI is already subtly enhancing your daily life.

That’s what I posted almost one year ago.

🌍 Working in innovation across low-resource environments globally, I constantly see the impact that access to cutting-edge technology has on creating opportunities. One technology stirring both excitement and challenge is AI 🤖. In this context, the direction I believe holds underrated promise for responsible real-world impact is open source, lightweight models, and small/edge/embedded solutions.

Why this? In resource-constrained environments, three key factors affect the practical and responsible use of AI: hardware requirements, model size, and openness. The first two are closely related: large AI models and high-demand systems require significant resources, whereas small, edge or embedded devices offer a more efficient alternative. Lightweight models offer accessible, efficient solutions, which is particularly important where infrastructure is a limiting factor. Open source, finally, is a concept that has yet to be fully defined for AI, but it is essential for the responsible use of the technology.

EdgeAI - AI performed directly on devices - is an interesting opportunity. Here’s why:

🔹 Data Sovereignty: By keeping data local, we protect sensitive information and align with data sovereignty principles.
🔹 Energy Efficiency: With reduced reliance on cloud computing, we can cut down on energy costs and make solutions more sustainable 🌱.
🔹 Low Latency: Faster data processing leads to more responsive and effective solutions, which is crucial in many critical applications.

Therefore, the focus of AI use case development in resource-constrained environments must also be on small applications. This is no less of a technological challenge! Building a future with Edge AI means integrating smart sensors, advanced processors, and optimised data management, supported by smart devices at the edge (see the sketch below for what this can look like in practice). The potential of this flavour of AI is just as transformative!
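To make "AI performed directly on devices" concrete, here is a minimal Python sketch of on-device inference, assuming a small, pre-converted and quantised TFLite model. The model file name, input data and the anomaly-detection use case are placeholders of my own, and tflite_runtime is just one of several possible edge runtimes.

```python
# Minimal sketch of on-device ("edge") inference with a small, quantised model.
# Assumes a pre-converted TFLite model file; path and "sensor" input are placeholders.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight runtime, no full TensorFlow needed

# Load the compact model once; data never has to leave the device (data sovereignty).
interpreter = Interpreter(model_path="anomaly_detector_int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify(sensor_window) -> np.ndarray:
    """Run one local inference on a window of sensor readings."""
    # Cast/reshape to whatever the model expects.
    data = np.asarray(sensor_window, dtype=input_details[0]["dtype"]).reshape(input_details[0]["shape"])
    interpreter.set_tensor(input_details[0]["index"], data)
    interpreter.invoke()  # runs entirely on the local CPU/NPU: low latency, no cloud round trip
    return interpreter.get_tensor(output_details[0]["index"])

# Example: score a dummy reading that stands in for a real smart-sensor stream.
print(classify(np.zeros(input_details[0]["shape"])))
```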

See examples of possible applications in this EU-supported programme: https://edge-ai-tech.eu/

💡 #AI #Innovation #EdgeAI #OpenSource #SustainableTech #TechForGood

A small robot greets a much larger robot with a screen displaying code, set against a city street backdrop.

German IT magazine #ct tested the usual suspects among LLMs, focusing on how they handle German-language content and on their energy footprint. The results speak for themselves. The French LLM #Mistral delivered impressive results, almost matching the quality of market leader ChatGPT while consuming far less energy. You can download this model and run it on-premises with just four #H100 cards (although they still cost 30,000 EUR each). A comparable Meta #Llama model requires four times that number of cards. The more cards you use, the higher your energy consumption, which leads to a cost of 5.40 EUR per 1 million tokens for Mistral versus 34.40 EUR for Llama (!) just to cover the electricity bill, on top of lower quality than Mistral, at least in German. Read the complete article here (German, paywall).
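For illustration, a quick back-of-the-envelope comparison in Python using only the numbers quoted above; card counts and prices are as cited from the article, and real costs will of course vary with utilisation and electricity prices.

```python
# Back-of-the-envelope comparison using only the figures cited from the c't test.
H100_PRICE_EUR = 30_000          # per card, as cited
MISTRAL_CARDS = 4                # Mistral runs on-premises on four H100s
LLAMA_CARDS = 4 * MISTRAL_CARDS  # "four times that number" -> 16 cards

# Electricity cost per 1 million tokens, EUR, as reported in the article.
electricity_per_mtoken = {"Mistral": 5.40, "Llama": 34.40}

print(f"Hardware: {MISTRAL_CARDS * H100_PRICE_EUR:,} EUR (Mistral) vs "
      f"{LLAMA_CARDS * H100_PRICE_EUR:,} EUR (Llama)")
# -> Hardware: 120,000 EUR (Mistral) vs 480,000 EUR (Llama)

ratio = electricity_per_mtoken["Llama"] / electricity_per_mtoken["Mistral"]
print(f"Electricity per million tokens: {ratio:.1f}x higher for Llama")
# -> Electricity per million tokens: 6.4x higher for Llama
```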

Getting a super-smart LLM (Large Language Model) ready for a new task, however, requires some specific knowledge to be included. So instead of a complete overhaul of the model, you just tweak it with …