Posts in: AI

In today’s fast-moving landscape, where technologies and customer demands evolve at lightning speed ⚑, traditional businesses face a critical challenge: keeping up. πŸ“‰ Their structures - designed for stability and efficiency - often make it difficult to pivot quickly enough.

A solution? Corporate Venturing. 🌟

By partnering with startups, launching internal ventures, or establishing accelerators and incubators, companies unlock a powerful edge:

βœ… Access to cutting-edge technologies that reshape industries
βœ… New business models to tap into emerging markets
βœ… Speed and agility to respond to change faster than competitors
βœ… A boost to entrepreneurial culture within the organization

Corporate venturing allows companies to combine their resources, expertise, and networks with the bold innovation of startups - a winning formula for long-term growth and resilience. 🌱

Of course, every opportunity comes with things to manage:

πŸ‘‰ Aligning corporate processes with startup agility
πŸ‘‰ Balancing risk with strategic goals
πŸ‘‰ Integrating new ideas into the core business

πŸ’‘So will the future belong to those who combine the stability of established companies with the creativity and speed of startups?

Have you seen this approach drive success? Let’s hear your insights! πŸ’¬βœ¨

🌟 #OpenSource #LLM Teuken-7B has been officially launched and is available for download πŸŽ‰ This milestone is part of the OpenGPT-X initiative, dedicated to creating large AI language models “Made in Germany” πŸ‡©πŸ‡ͺ for both business and research needs. Even with limited resources - just 18% of the computing power used to train models like Meta’s Llama3 8B - Teuken-7B offers some interesting features 🎯

Here’s what makes Teuken-7B stand out:

πŸ”‘ Key Features:

  • Multilingual & Open Source: Built to support all 24 official EU languages, emphasizing European linguistic diversity. 🌍
  • Trustworthy & Versatile: Tailored to Europe’s wide range of cultures and businesses, with a strong focus on openness and community collaboration. 🀝

βš™οΈ Technical Innovations: β€’ Custom Multilingual Tokenizer: Specifically optimized for European languages for enhanced efficiency and performance. Might help also to include other languages into LLMs. β€’ Efficient training: Developed using just 18% of the computing power required for models such as Meta’s Llama3 8B, and less than 1% for larger models such as Llama3 405B - making this an interesting approach for resource-constrained environments and lower energy consumption for training. πŸŒπŸ’‘

πŸ“š Training Data:

  • Over 50% non-English content, including smaller languages such as Maltese.
  • Dedicated benchmarks for multilingualism, aiming at comparable output quality across all supported languages.
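Tokenizer efficiency is often measured as “fertility” - the average number of tokens produced per word; the lower the fertility for a language, the cheaper and faster that language is to process. Here is a minimal sketch of the metric, using crude stand-in tokenizers rather than Teuken-7B’s actual one:

```python
# Toy illustration of tokenizer "fertility" (average tokens per word).
# The tokenizers below are simplified stand-ins, NOT Teuken-7B's tokenizer.

def fertility(tokenize, text):
    """Average number of tokens produced per whitespace-separated word."""
    words = text.split()
    tokens = [t for w in words for t in tokenize(w)]
    return len(tokens) / len(words)

def char_bigram_tokenize(word):
    """A crude tokenizer that chops words into character bigrams."""
    return [word[i:i + 2] for i in range(0, len(word), 2)]

def word_tokenize(word):
    """An idealized tokenizer with every word in its vocabulary."""
    return [word]

# German compounds are where a poorly fitted tokenizer explodes:
text = "Die Donaudampfschifffahrtsgesellschaft"
low = fertility(word_tokenize, text)           # ideal case: 1 token/word
high = fertility(char_bigram_tokenize, text)   # many tokens per word
```

A tokenizer tuned for European languages pushes fertility for those languages toward the lower end, which directly reduces training and inference cost per text.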

With Teuken-7B, Europe is showing some resilience in the geostrategic competition for influence in the emerging AI market. πŸš€

For more details, check out the project: πŸ‘‰ Teuken-7B: opengpt-x.de/en/models…

#AI #OpenSource #Teuken7B #Innovation #MultilingualAI #OpenGPTX #MachineLearning #EuropeanTech πŸ’»πŸŒ

Screenshot of Teuken website

πŸš€ Are rapid technological advancements leaving IT departments playing catch-up? This isn’t a new challenge. It was similar when instant messengers first came out and corporate IT only provided email. It can be tough to balance day-to-day operations, endless red tape, compliance needs, and procurement processes!

Enter #ShadowIT: an intriguing phenomenon in today’s workplaces. 🌐 It describes employees using their own IT tools for work - these days, often AI tools that help them be more productive or tackle specific tasks. Of course, this is a bit of a mixed bag, with both pros and cons.

Large Language Models (#LLMs) are a case in point: even as IT departments move quickly to bring these powerful tools into the office, employees often have access to even more capable ones on their personal devices. The pace of technological development outside the workplace is relentless these days!

This dynamic reopens the discussion on how to leverage personal technological discoveries while maintaining security and efficiency at work. Thoughts on balancing these innovations with workplace policies? πŸ’¬ #TechInnovation

A small, tunnel-like structure labeled Shadow IT houses computers in a remote, desert-like landscape under bright sunlight.

If you have to deal with what is called #AI in a professional environment these days, you will often come across the term #RAG. These Retrieval-Augmented Generation systems are intended to address many of the weaknesses of Large Language Models (#LLM).

These are the core elements:

πŸ” Retriever

  • Fetches relevant documents from external knowledge bases.
  • Utilizes vector representations for efficient text search.
  • Employs methods like keyword-based search, semantic search, or vector search to find pertinent information.

🧩 Augmentation

  • Integrates retrieved data into the model’s input.
  • Filters and structures information for relevance.
  • Prepares data to optimize the generation process.

πŸ’‘ Generator

  • The Large Language Model (LLM) processes both the query and augmented information.
  • Generates responses conditioned on the retrieved data.
  • Delivers the final, synthesized response of the RAG system.

RAG systems help create more accurate, contextual, and efficient solutions, which makes them useful in many areas. They are just one step in the increasingly meaningful application of machine intelligence to real-world use cases.
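The three stages above can be sketched in a few lines of Python. This is a deliberately minimal toy: the “vector search” is a bag-of-words cosine similarity, the generator is a stub standing in for a real LLM call, and the documents and prompt template are illustrative assumptions.

```python
import math
from collections import Counter

# Toy knowledge base; a real system would hold chunked documents with
# embeddings from a dedicated embedding model.
DOCUMENTS = [
    "RAG combines a retriever with a generator.",
    "The retriever fetches relevant documents from a knowledge base.",
    "Paris is the capital of France.",
]

def embed(text):
    """Stand-in embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=2):
    """Retriever: rank documents by similarity to the query."""
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def augment(query, docs):
    """Augmentation: fold the retrieved context into the model input."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def generate(prompt):
    """Generator: placeholder for an actual LLM call."""
    return f"[LLM response to a {len(prompt)}-character prompt]"

query = "What does the retriever do?"
answer = generate(augment(query, retrieve(query)))
```

Swapping the bag-of-words vectors for real embeddings and the stub for an LLM API call turns this skeleton into the standard RAG pipeline.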

An abstract digital visualization featuring the letters RAG formed by interconnected nodes and lines, resembling a network or neural connections, set against a gradient background transitioning from pink to purple and blue.

πŸ’‘ A few weeks ago, I posted about #OpenWashing in #AI. What is OpenWashing? 🚨 It’s the practice of labeling content as open source to signal transparency, while the actual openness… leaves much to be desired.

🌟 Now, the Open Source Initiative (OSI) has released version 1.0 of its Open Source AI Definition πŸ“œ, outlining what it truly means for AI to be #OpenSource.

πŸ”‘ Built on the four freedoms of open source software (#OSS) – use, understand, modify, and redistribute – this definition extends these principles to AI systems, including their code, weights, parameters, and documented training data.

πŸ“’ The initiative’s goal? Promote transparency 🌍 and enable reproducibility πŸ”„, ensuring AI systems can be independently reconstructed and adapted. πŸ’‘ Open source AI should allow users to build on and adapt models, fostering innovation and trust.

βš–οΈ While the OSI cannot enforce compliance, it plans to publicly name and shame AI models that falsely claim open source status. This move could influence policy, including the EU’s AI ActπŸ‡ͺπŸ‡Ί, which introduces exemptions and obligations for open-source AI.

✨ Will this definition shape global legislative approaches? Only time will tell. ⏳


πŸ“– Read the OSI’s definition here: The Open Source AI Definition – 1.0

A road sign displaying the words four freedoms stands against a backdrop of a futuristic, illuminated tunnel.

🌟 Rethinking AI: From Misconceptions to Clarity 🌟

The term ‘artificial intelligence’ (AI) still causes a lot of confusion, so we should rebrand to ‘machine intelligence’.

What we call AI doesn’t “understand” in the same way as humans do. It recognises and creates patterns based on probabilistic models. It’s important to understand this difference because it helps us have realistic expectations and make the most of machine intelligence.

Even though this might seem like hair-splitting, let’s embrace the nuances and educate others about the true nature of AI. If we do this, we can help build a more informed and innovative community, driving forward the impressive advances that machine intelligence offers.

If we can cut through the noise and get to the heart of what AI really is, it becomes much easier to work with it in a meaningful way. And this clarity doesn’t ignore the challenges, such as high resource consumption and potential negative impacts, e.g. on the workforce. But it helps us talk more clearly and sensibly about what we can use it for and what problems we might run into.

#MachineIntelligence #Innovation #FutureOfWork

A vibrant digital art piece depicts swirling waves of interconnected particles and light, creating a dynamic flow of energy.

While the large and resource-intensive #LargeLanguageModels #LLMs are grabbing headlines, it's likely that 'tiny AI' #tinyAI is already subtly enhancing your daily life.

That’s what I posted almost a year ago.

🌍 Working in innovation across low-resource environments globally, I constantly see the impact that access to cutting-edge technology has in creating opportunities. One technology stirring both excitement and challenge is AI πŸ€–. In this context, the direction I believe holds an underrated promise for responsible real-world impact is open source, light models, and small/edge/embedded solutions.

Why this? In resource-constrained environments, three closely related factors shape the practical and responsible use of AI: hardware requirements, model size, and openness. Large AI models and high-demand systems require significant resources, whereas small, edge, or embedded devices offer a more efficient alternative. Lightweight models provide accessible, efficient solutions, which is particularly important where infrastructure is a limiting factor. And open source, while still not fully defined for AI, is essential for the responsible use of the technology.

EdgeAI - AI performed directly on devices - is an interesting opportunity. Here’s why:

πŸ”Ή Data Sovereignty: By keeping data local, we protect sensitive information and align with data sovereignty principles.
πŸ”Ή Energy Efficiency: With reduced reliance on cloud computing, we can cut down on energy costs and make solutions more sustainable 🌱.
πŸ”Ή Low Latency: Faster data processing leads to more responsive and effective solutions - crucial in many critical applications.

Therefore, the focus of AI use case development in resource-constrained environments must also be on small applications. This is no less of a technological challenge! Building a future with Edge AI means integrating smart sensors, advanced processors, and optimised data management, supported by smart devices at the edge. The potential of this flavour of AI is just as transformative!
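One concrete technique behind such lightweight models is post-training quantization: storing weights as 8-bit integers plus a scale factor instead of 32-bit floats, cutting memory roughly 4x. The sketch below is a toy symmetric scheme with made-up weights - real toolchains (e.g. TensorFlow Lite or ONNX Runtime) do this far more carefully, per channel and with calibration data.

```python
# Toy post-training quantization: map float weights to int8 plus a
# per-tensor scale. Weight values here are illustrative, not from any model.

def quantize(weights):
    """Symmetric 8-bit quantization: w ≈ q * scale, with q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original float weights."""
    return [v * scale for v in q]

weights = [0.81, -0.34, 0.02, -1.27, 0.55]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Rounding bounds the per-weight error by half a quantization step:
max_err = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, restored))
```

Four times less memory and cheap integer arithmetic are exactly what makes 7B-class (and much smaller) models feasible on edge hardware.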

See examples of possible applications in this EU-supported programme: https://edge-ai-tech.eu/

πŸ’‘ #AI #Innovation #EdgeAI #OpenSource #SustainableTech #TechForGood

A small robot greets a much larger robot with a screen displaying code, set against a city street backdrop.

German IT magazine #ct tested the usual suspect LLMs, focusing on how they handle German-language content and on their energy footprint. The results speak for themselves: the French LLM #Mistral delivered impressive results, almost matching the quality of market leader ChatGPT while consuming far less energy. You can download this model and run it on-premises with just four #H100 cards (although they still cost 30,000 EUR each). A comparable Meta #LLama model requires four times that number. The more cards you use, the higher your energy consumption: the electricity bill alone comes to 5.40 EUR per 1 million tokens for Mistral versus 34.40 EUR for Llama (!) - and Llama delivers lower quality than Mistral, at least in German. Read the complete article here (German, paywall).
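Out of curiosity, the quoted electricity costs can be turned into rough energy figures. Only the two per-million-token costs come from the article; the electricity price below is an assumed value.

```python
# Back-of-the-envelope: energy implied by the article's electricity costs.
# The price is an ASSUMED tariff, not a figure from the c't article.

PRICE_EUR_PER_KWH = 0.30  # assumed; adjust to your local tariff

def implied_energy_kwh(cost_eur_per_million_tokens, price=PRICE_EUR_PER_KWH):
    """Energy per million tokens implied by a given electricity cost."""
    return cost_eur_per_million_tokens / price

mistral_kwh = implied_energy_kwh(5.40)   # per million tokens
llama_kwh = implied_energy_kwh(34.40)

# The ratio is independent of the assumed price:
ratio = llama_kwh / mistral_kwh
```

Whatever the actual tariff, the cost gap implies Llama consumed roughly six times more energy per token in this setup.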

Having a super-smart LLM (Large Language Model) ready for a new task, however, requires some task-specific knowledge to be included. So instead of a complete overhaul of the model, you just tweak it with …