
Artificial Intelligence

Deploying Sustainable AI

MIT Technology Review explores the importance of sustainable AI and its impact on energy consumption with Intel's Zane Ball.


Deploying High-Performance, Energy-Efficient AI

Although AI is by no means a new technology, it has recently attracted massive and rapid investment, particularly in large language models. However, the high-performance computing that powers these rapidly growing AI tools, and enables record automation and operational efficiency, also consumes a staggering amount of energy. With the proliferation of AI comes the responsibility to deploy it responsibly, with an eye to sustainability during hardware and software R&D as well as within data centers.

“Enterprises need to be very aware of the energy consumption of their digital technologies, how big it is, and how their decisions are affecting it,” says Zane Ball, corporate vice president and general manager of data center platform engineering and architecture at Intel.

One of the key drivers of more sustainable AI is modularity, says Ball. Modularity breaks the subsystems of a server down into standard building blocks and defines interfaces between those blocks so they can work together. This approach can reduce the embodied carbon in a server's hardware components and allows components of the overall ecosystem to be reused, reducing R&D investment.

Downsizing infrastructure within data centers, hardware, and software can also help enterprises reach greater energy efficiency without compromising function or performance. While very large AI models require megawatts of supercompute power, smaller, fine-tuned models that operate within a specific knowledge domain can maintain high performance with far lower energy consumption.

“You give up that kind of amazing general purpose use, like when you're using ChatGPT-4 and you can ask it everything from 17th century Italian poetry to quantum mechanics. But if you narrow your range, these smaller models can give you equivalent or better capability at a tiny fraction of the energy consumption,” says Ball.

The opportunities for greater energy efficiency in AI deployment will only expand over the next three to five years. Ball forecasts significant strides in hardware optimization; the rise of AI factories, facilities that train AI models on a large scale while modulating energy consumption based on its availability; and the continued growth of liquid cooling, driven by the need to cool the next generation of powerful AI hardware.

“I think making those solutions available to our customers is starting to open people's eyes to how energy efficient you can be while not really giving up a whole lot in terms of the AI use case that you're looking for,” says Ball.

This episode of Business Lab is produced in partnership with Intel.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic is building a better AI architecture. Going green isn't for the faint of heart, but it's a pressing need for many, if not all, enterprises. AI provides many opportunities for enterprises to make better decisions, so how can it also help them be greener?

Two words for you: Sustainable AI.

My guest is Zane Ball, corporate vice president and general manager of data center platform engineering and architecture at Intel.

This podcast is produced in partnership with Intel.

Welcome, Zane.

Zane Ball: Good morning.

Laurel: So to set the stage for our conversation, let’s start off with the big topic. As AI transforms businesses across industries, it brings the benefits of automation and operational efficiency, but that high-performance computing also consumes more energy. Could you give an overview of the current state of AI infrastructure and sustainability at the large enterprise level?

Zane: Absolutely. I think it helps to zoom out to the big picture. If you look at the history of IT services over maybe the last 15 years or so, obviously computing has been expanding at a very fast pace. And the good news about that history is that while computing has been expanding fast, we've been able to contain the growth in energy consumption overall.

There was a great study a couple of years ago in Science Magazine that talked about how compute had grown by maybe 550% over a decade, but we had increased electricity consumption by just a few percent. So those kinds of efficiency gains were really profound. So I think the way to think about it is that computing's been expanding rapidly, and that of course creates all kinds of benefits in society, many of which reduce carbon emissions elsewhere.

But we've been able to do that without growing electricity consumption all that much. And that's been possible because of things like Moore's Law: silicon has been improving every couple of years, devices get smaller, they consume less power, and things get more efficient.

That's part of the story. Another big part of the story is the advent of hyperscale data centers: really, really large-scale computing facilities that find all kinds of economies of scale and efficiencies, with high utilization of hardware and not a lot of idle hardware sitting around.

That also was a very meaningful energy efficiency gain. And then finally there was the development of virtualization, which allowed even more efficient utilization of hardware. So those three things together allowed us to accomplish something really remarkable. And during that time AI also started to play a role; I think since about 2015, AI workloads have played a pretty significant role in digital services of all kinds.

But then just about a year ago, ChatGPT happened and we had a non-linear shift in the environment. Suddenly large language models, probably not news to anyone listening to this podcast, have pivoted to the center, and there's just breakneck investment across the industry to build very, very fast. What is also driving that is that not only is everyone rushing to take advantage of this amazing large language model technology, but that technology itself is evolving very quickly. In fact, as is also quite well known, these models are growing in size at a rate of about 10x per year.

So the amount of compute required is really staggering. And when you think of all the digital services in the world now being infused with AI use cases with very large models, and then those models themselves growing 10x per year, we're looking at something very different from that last decade, where our efficiency gains and our greater consumption were almost penciling out.

Now we're looking at something that I think is not going to pencil out, and we're really facing a significant growth in energy consumption in these digital services. I think that's concerning, and it means that we've got to take some strong actions across the industry to get on top of this. I think just the very availability of electricity at this scale is going to be a key driver. But of course many companies have net-zero goals, and as we pivot into some of these AI use cases, we've got work to do to square all of that together.

Laurel: Yeah, as you mentioned, the challenge is to develop sustainable AI while making data centers more energy efficient. So could you describe what modularity is and how a modularity ecosystem can power a more sustainable AI?

Zane: Yes. I think over the last three or four years there have been a number of initiatives, and Intel's played a big part in this as well, re-imagining how servers are engineered into modular components. And really, modularity for servers is just exactly as it sounds.

We break different subsystems of the server down into some standard building blocks, define some interfaces between those standard building blocks so that they can work together. And that has a number of advantages. Number one, from a sustainability point of view, it lowers the embodied carbon of those hardware components.

Some of these hardware components are quite complex and very energy intensive to manufacture. A 30-layer circuit board, for example, is a pretty carbon-intensive piece of hardware. I don't want that complexity in the entire system if only a small part of the system needs it; I can just pay the price of the complexity where I need it.

And by being intelligent about how we break up the design into different pieces, we bring that embodied carbon footprint down. The reuse of pieces also becomes possible. When we upgrade a system, maybe to a new telemetry approach or a new security technology, there's just a small circuit board that has to be replaced, versus replacing the whole system.

Or maybe a new microprocessor comes out and the processor module can be replaced without investing in new power supplies, a new chassis, new everything. And so that circularity and reuse becomes a significant opportunity. That embodied carbon aspect, which is about 10% of the carbon footprint in these data centers, can be significantly improved. And another benefit of modularity, aside from sustainability, is that it brings R&D investment down.

So if I'm going to develop a hundred different kinds of servers, and I can build those servers from the very same building blocks just configured differently, I'm going to have to invest less money and less time. And that is a real driver of the move towards modularity as well.

Laurel: So what are some of the techniques and technologies, like liquid cooling and ultra-high-density compute, that large enterprises can use to compute more efficiently? And what are their effects on water consumption, energy use, and overall performance, as you were outlining earlier?

Zane: Yeah, those are two, I think, very important opportunities. Let's take them one at a time. In the emerging AI world, I think liquid cooling is probably one of the most important low-hanging-fruit opportunities. In an air-cooled data center, a tremendous amount of energy goes into fans and chillers and evaporative cooling systems, and that is actually a significant part of the total.

So if you move a data center to a fully liquid-cooled solution, this is an opportunity of around 30% of energy consumption, which is sort of a wow number. I think people are often surprised just how much energy is burned. If you walk into a data center, you almost need ear protection because it's so loud; the hotter the components get, the higher the fan speeds get, and the more energy is being burned on the cooling side. Liquid cooling takes a lot of that off the table.

What offsets that is that liquid cooling is a bit complex, and not everyone is fully able to utilize it. There are more upfront costs, but it actually saves money in the long run, so the total cost of ownership with liquid cooling is very favorable as we're engineering new data centers from the ground up. Liquid cooling is a really exciting opportunity, and I think the faster we can move to liquid cooling, the more energy we can save.
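To make that roughly 30% figure concrete, here is a rough back-of-the-envelope sketch in terms of power usage effectiveness (PUE), the ratio of total facility energy to IT equipment energy. The specific PUE values below are illustrative assumptions, not Intel figures:

```python
# Back-of-the-envelope: facility energy saved by moving from air to liquid cooling.
# PUE (power usage effectiveness) = total facility energy / IT equipment energy.
# The PUE values below are illustrative assumptions, not measured figures.

it_load_kw = 1_000        # assumed IT load of the facility
pue_air = 1.5             # assumed typical air-cooled facility
pue_liquid = 1.1          # assumed well-engineered liquid-cooled facility

total_air_kw = it_load_kw * pue_air
total_liquid_kw = it_load_kw * pue_liquid
savings = (total_air_kw - total_liquid_kw) / total_air_kw

print(f"Air-cooled draw:    {total_air_kw:,.0f} kW")
print(f"Liquid-cooled draw: {total_liquid_kw:,.0f} kW")
print(f"Facility-level savings: {savings:.0%}")  # about 27% with these assumptions
```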

But it's a complicated world out there. There are a lot of different situations, a lot of different infrastructures to design around, so we shouldn't trivialize how hard that is for an individual enterprise. One of the other benefits of liquid cooling is that we get out of the business of evaporating water for cooling. A lot of North American data centers are in arid regions and use large quantities of water for evaporative cooling.

That is good from an energy consumption point of view, but the water consumption can be really extraordinary. I've seen numbers getting close to a trillion gallons of water per year in North American data centers alone. And then in humid climates like Southeast Asia or eastern China, for example, that evaporative cooling capability is not as effective, and so much more energy is burned.

And so if you really want to get to really aggressive energy efficiency numbers, you just can’t do it with evaporative cooling in those humid climates. And so those geographies are kind of the tip of the spear for moving into liquid cooling.

The other opportunity you mentioned was density. Bringing higher and higher density to computing has been the trend for decades; that is effectively what Moore's Law has been pushing us toward. And I think it's just important to realize that's not done yet.

As much as we think about racks of GPUs and accelerators, we can still significantly improve energy consumption with higher and higher density traditional servers, which allows us to pack what might've been a whole row of racks into a single rack of computing in the future. And those are substantial savings. At Intel, we've announced an upcoming processor that has 288 CPU cores, and 288 cores in a single package enables us to build racks with as many as 11,000 CPU cores.

So the energy savings there are substantial, not just because those chips are very, very efficient, but because the amount of networking equipment and ancillary things around those systems is a lot less, because you're using those resources more efficiently with those very dense components. So continuing, and perhaps even accelerating, our path to this ultra-high-density kind of computing is going to help us get to the energy savings we need, maybe to accommodate some of those larger models that are coming.
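For a rough sense of the arithmetic behind that rack-level figure, here is a sketch; the two-socket, 19-server rack layout is an assumed configuration for illustration, not a published Intel design:

```python
# Rough arithmetic behind the ~11,000-cores-per-rack figure.
# The rack layout (sockets per server, servers per rack) is an assumption.

cores_per_package = 288   # the announced 288-core processor
sockets_per_server = 2    # assumed two-socket servers
servers_per_rack = 19     # assumed; e.g. 2U systems in a standard 42U rack

cores_per_server = cores_per_package * sockets_per_server  # 576 cores
cores_per_rack = cores_per_server * servers_per_rack       # 10,944 cores

print(f"{cores_per_rack:,} CPU cores per rack")  # close to 11,000
```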

Laurel: Yeah, that definitely makes sense. And this is a good segue into the other part of it, which is how data centers, hardware, and software can work together to create more energy-efficient technology without compromising function. So how can enterprises invest in more energy-efficient approaches such as hardware-aware software and, as you were mentioning earlier, large language models or LLMs on smaller, downsized infrastructure, but still reap the benefits of AI?

Zane: I think there are a lot of opportunities, and maybe the most exciting one I see right now is that even as we're pretty wowed and blown away by what these really large models are able to do, even though they require tens of megawatts of supercompute power, you can actually get a lot of those benefits with far smaller models, as long as you're content to operate them within some specific knowledge domain. We've often referred to these as expert models. Take, for example, an open source model like the Llama 2 that Meta produced; there's a 7 billion parameter version of that model.

There are also, I think, 13 and 70 billion parameter versions of that model. Compared to GPT-4, maybe something like a trillion-parameter model, it's far, far smaller. But you can fine-tune that model with data for a specific use case. If you're an enterprise, you're probably working on something fairly narrow and specific that you're trying to do.

Maybe it's a customer service application or a financial services application, and you as an enterprise have a lot of data from your operations; that's data that you own and have the right to use to train the model. And so even though it's a much smaller model, when you train it on that domain-specific data, the domain-specific results can be quite good, in some cases even better than those of the large model.

So you give up that kind of amazing general purpose use, like when you're using ChatGPT-4 and you can ask it everything from 17th century Italian poetry to quantum mechanics. But if you narrow your range, these smaller models can give you equivalent or better capability at a tiny fraction of the energy consumption.
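As a concrete illustration of the domain fine-tuning Ball describes, here is a minimal sketch using the Hugging Face transformers and peft libraries with LoRA adapters. The model name, data file, and hyperparameters are illustrative assumptions, not a prescribed recipe:

```python
# Minimal LoRA fine-tuning sketch for a small "expert" model.
# Assumes transformers, peft, and datasets are installed, access to the Llama 2
# weights is granted, and domain_corpus.jsonl holds domain-specific text records.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # the 7B "expert" base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of all 7B weights, which is a
# large part of why the compute and energy cost stays low.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"))

data = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="expert-model", num_train_epochs=1,
                           per_device_train_batch_size=1, bf16=True),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

The adapter that comes out of a run like this is tiny relative to the base model, so the same base weights can be shared across many domain experts.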

And we've demonstrated a few times that even with just a standard Intel Xeon two-socket server, with some of the AI acceleration technologies we have in those systems, you can actually deliver quite a good experience, and that's without any GPUs involved in the system. So that's just good old-fashioned servers, and I think that's pretty exciting.

That also means the technology's quite accessible, right? You may be an enterprise with general purpose infrastructure that you use for a lot of things; you can use that for AI use cases as well if you take advantage of these smaller models that fit within the infrastructure you already have or infrastructure that you can easily obtain. And so those smaller models are pretty exciting opportunities.
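Here is a minimal sketch of what that CPU-only deployment can look like, assuming PyTorch, transformers, and the optional intel-extension-for-pytorch package are installed; the model choice and prompt are illustrative:

```python
# CPU-only inference sketch: no GPU anywhere in the system.
import torch
import intel_extension_for_pytorch as ipex  # optional Intel CPU optimizations
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-chat-hf"  # illustrative small chat model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)
model.eval()

# ipex.optimize applies CPU-side kernel optimizations, including paths that
# use the AMX matrix extensions on recent Xeon parts.
model = ipex.optimize(model, dtype=torch.bfloat16)

prompt = "Summarize our refund policy for a customer:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```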

And I think that's probably one of the first things the industry will adopt to get energy consumption under control: just right-sizing the model to the activity, to the use case that we're targeting. I think there's also… you mentioned the concept of hardware-aware software. I think that the collaboration between hardware and software has always been an opportunity for significant efficiency gains.

I mentioned early on in this conversation how virtualization was one of the pillars that gave us that kind of fantastic result over the last 15 years. And that was very much exactly that: deep collaboration between the operating system and the hardware to do something remarkable. A lot of the acceleration that exists in AI today actually comes from a similar kind of thinking, but that's not really the end of the hardware-software collaboration. We can deliver quite stunning results in encryption, in memory utilization, in a lot of areas.

And I think that's got to be an area where the industry is ready to invest. It is very easy to have plug-and-play hardware where everyone programs in a super-high-level language and nobody thinks about the impact of their software application downstream. I think that's going to have to change. We're going to have to really understand how our application designs are impacting energy consumption going forward. And it isn't purely a hardware problem; it's got to be hardware and software working together.

Laurel: And you've outlined so many of these different kinds of technologies. So how can enterprises be incentivized to adopt things like modularity, liquid cooling, and hardware-aware software, and actually make use of all these new technologies?

Zane: A year ago, I worried a lot about that question. How do we get people who are developing new applications to just be aware of the downstream implications? One of the benefits of this revolution in the last 12 months is that I think the availability of electricity is going to be a big challenge for many enterprises as they seek to adopt some of these energy-intensive applications. And I think the hard reality of energy availability is going to bring some very strong incentives, very quickly, to attack these kinds of problems.

But I do think that beyond that, as in a lot of areas in sustainability, accounting is really important. There are a lot of good intentions. There are a lot of companies with net-zero goals that they're serious about, and they're willing to take strong actions against those goals. But if you can't accurately measure what your impact is, either as an enterprise or as a software developer, it's hard to act on those intentions. I think you have to find the point of action, where the rubber meets the road, where a micro-decision is being made.

And if the carbon impact of that decision is understood at that point, then I think you can see people take action and take advantage of the tools and capabilities that are there to get a better result. And so I know there are a number of initiatives in the industry to create that kind of accounting, and especially for software development, I think that's going to be really important.
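One concrete way to put that accounting at the point where a micro-decision is made is a process-level emissions meter. Here is a minimal sketch using the open-source codecarbon library, one option among several; the workload function is a stand-in:

```python
# Sketch: estimating the carbon footprint of a single job with codecarbon.
from codecarbon import EmissionsTracker

def run_training_job():
    # Stand-in for the real workload being measured.
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="expert-model-finetune")
tracker.start()
try:
    run_training_job()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2e")
```

Numbers like this, logged per build or per training run, are the kind of per-decision accounting that the industry initiatives Ball mentions are trying to standardize.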

Laurel: Well, it’s also clear there’s an imperative for enterprises that are trying to take advantage of AI to curb that energy consumption as well as meet their environmental, social, and governance or ESG goals. So what are the major challenges that come with making more sustainable AI and computing transformations?

Zane: It's a complex topic, and I think we've already touched on a couple of them. Just as I was mentioning, there's definitely getting software developers to understand their impact within the enterprise. And if I'm an enterprise that's procuring my applications and software, maybe cloud services, I need to make sure that accounting is part of my procurement process. In some cases that's gotten easier.

In other cases, there's still work to do. If I'm operating my own infrastructure, I really have to look at liquid cooling, for example, and at adopting some of these more modern technologies that let us get to significant gains in energy efficiency. And of course, really looking at the use cases and finding the most energy-efficient architecture for each use case, for example, using those smaller models that I was talking about. Enterprises need to be very aware of the energy consumption of their digital technologies, how big it is, and how their decisions are affecting it.

Laurel: So could you offer an example or use case of one of those energy-efficient, AI-driven architectures and how AI was subsequently deployed for it?

Zane: Yes. I think some of the best examples I've seen in the last year were really around these smaller models. Intel did an example that we published around financial services, and we found that so…


Artificial Intelligence

Let's Govern AI Rather Than Let It Govern Us



A pivotal moment has unfolded at the United Nations General Assembly. For the first time, a resolution was adopted focused on ensuring Artificial Intelligence (AI) systems are “safe, secure and trustworthy”, marking a significant step towards integrating AI with sustainable development globally. This initiative, led by the United States and supported by an impressive cohort of over 120 other Member States, underscores a collective commitment to navigating the AI landscape with the utmost respect for human rights.

But why does this matter to us, the everyday folks? AI isn’t just about robots from sci-fi movies anymore; it’s here, deeply embedded in our daily lives. From the recommendations on what to watch next on Netflix to the virtual assistant in your smartphone, AI’s influence is undeniable. Yet, as much as it simplifies tasks, the rapid evolution of AI also brings forth a myriad of concerns – privacy issues, ethical dilemmas and the very fabric of our job market being reshaped.

The Unanimous Call for Responsible AI Governance

The resolution highlights a crucial understanding: the rights we hold dear offline must also be protected in the digital realm, throughout the lifecycle of AI systems. It's a call to action not just for countries but for companies, civil society, researchers, and media outlets to develop and support governance frameworks that ensure the safe and trustworthy use of AI. It acknowledges the varying levels of technological development across the globe and stresses the importance of supporting developing countries to close the digital divide and bolster digital literacy.

The United States Ambassador to the UN, Linda Thomas-Greenfield, shed light on the inclusive dialogue that led to this resolution. It’s seen as a blueprint for future discussions on the challenges AI poses, be it in maintaining peace, security, or responsible military use. This resolution isn’t about stifling innovation; rather, it’s about ensuring that as we advance, we do so with humanity, dignity, and a steadfast commitment to human rights at the forefront.

This unprecedented move by the UN General Assembly is not just a diplomatic achievement; it’s a global acknowledgment that while AI has the potential to transform our world for the better, its governance cannot be taken lightly. The resolution, co-sponsored by countries including China, represents a united front in the face of AI’s rapid advancement and its profound implications.

Bridging the Global Digital Divide

As we stand at this crossroads, the message is clear: the journey of AI is one we must steer with care, ensuring it aligns with our shared values and aspirations. The resolution champions a future where AI serves as a force for good, propelling us towards the Sustainable Development Goals, from eradicating poverty to ensuring quality education for all.

aiunitednations

The emphasis on cooperation, especially in aiding developing nations to harness AI, underscores a vision of a world where technological advancement doesn’t widen the gap between nations but brings us closer to achieving global equity. It’s a reminder that in the age of AI, our collective wisdom, empathy, and collaboration are our most valuable assets.

Ambassador Thomas-Greenfield’s remarks resonate with a fundamental truth: the fabric of our future is being woven with threads of artificial intelligence. It’s imperative that we, the global community, hold the loom. The adoption of this resolution is not the end, but a beginning—a stepping stone towards a comprehensive framework where AI enriches lives without compromising our moral compass.

At the heart of this resolution is the conviction that AI, though devoid of consciousness, must operate within the boundaries of our collective human conscience. The call for AI systems that respect human rights isn’t just regulatory rhetoric; it’s an appeal for empathy in algorithms, a plea to encode our digital evolution with the essence of what it means to be human.

This brings to light a pertinent question: How do we ensure that AI’s trajectory remains aligned with human welfare? The resolution’s advocacy for cooperation among nations, especially in supporting developing countries, is pivotal. It acknowledges that the AI divide is not just a matter of technological access but also of ensuring that all nations have a voice in shaping AI’s ethical landscape. By fostering an environment where technology serves humanity universally, we inch closer to a world where AI’s potential is not a privilege but a shared global heritage.

Furthermore, the resolution’s emphasis on bridging the digital divide is a clarion call for inclusivity in the digital age. It’s a recognition that the future we craft with AI should be accessible to all, echoing through classrooms in remote villages and boardrooms in bustling cities alike. The initiative to equip developing nations with AI tools and knowledge is not just an act of technological philanthropy; it’s an investment in a collective future where progress is measured not by the advancements we achieve but by the lives we uplift.

Uniting for a Future Shaped by Human Values

The global consensus on this resolution, with nations like the United States and China leading the charge, signals a watershed moment in international diplomacy. It showcases a rare unity in the quest to harness AI’s potential responsibly, amidst a world often divided by digital disparities. The resolution’s journey, from conception to unanimous adoption, reflects a world waking up to the reality that in the age of AI, our greatest strength lies in our unity.

As we stand at the dawn of this new era, the resolution serves as both a compass and a beacon; a guide to navigate the uncharted waters of AI governance and a light illuminating the path to a future where technology and humanity converge in harmony. The unanimous adoption of this resolution is not just a victory for diplomacy; it’s a promise of hope, a pledge that in the symphony of our future, technology will amplify, not overshadow, the human spirit.

In conclusion, “Let’s Govern AI Rather Than Let It Govern Us” is more than a motto; it’s a mandate for the modern world. It’s a call to action for every one of us to participate in shaping a future where AI tools are wielded with wisdom, wielded to weave a tapestry of progress that reflects our highest aspirations and deepest values.


Artificial Intelligence

KASBA.AI Now Available on ChatGPT Store

ChatGPT Store by OpenAI is the new platform for developers to create and share unique AI models with monetization opportunities



OpenAI, the leading artificial intelligence research laboratory, has taken a significant step forward with the launch of the ChatGPT Store. This new platform allows developers to create and share their unique AI models, expanding the capabilities of the already impressive ChatGPT. Among the exciting additions to the store are Canva, Veed, AllTrails, and now KASBA.AI, with many more entering every day.

About OpenAI

OpenAI, co-founded by Elon Musk, Sam Altman, and others, has always been at the forefront of AI research. With a mission to ensure that artificial general intelligence benefits all of humanity, they have consistently pushed the boundaries of what is possible in the field.

OpenAI's ChatGPT has already changed the way we interact with technology through its ability to generate coherent and contextually relevant responses. Now, with the ChatGPT Store, OpenAI is aiming to empower developers and non-technical users to contribute to and build upon this powerful platform.


What is the ChatGPT Store?

The ChatGPT Store is an exciting initiative that allows developers to create, share, and, in time, monetise their unique AI models. It serves as a marketplace for AI models that can be integrated with ChatGPT.

This means that users can now have access to a wide range of specialised conversational AI models, catering to their specific needs. The ChatGPT Store opens up a world of possibilities, making AI more accessible and customisable than ever before.


Key Features of the ChatGPT Store

Some unique features of the store include customisable AI models, pre-trained models for quick integration, and the ability for developers to earn money by selling their models.

Developers can also leverage the rich ecosystem of tools and resources provided by OpenAI to enhance their models. This collaborative marketplace fosters innovation and encourages the development of conversational AI that can cater to various industries and use cases.

Impact on Industries and Society

The launch of the ChatGPT Store has far-reaching implications for industries and society as a whole. By making AI models more accessible and customisable, businesses can now leverage conversational AI to enhance customer support, automate repetitive tasks, and improve overall efficiency.

From healthcare and finance to education and entertainment, the impact of AI on various sectors will only grow with the availability of specialised models on the ChatGPT Store. This democratisation of conversational AI technology will undoubtedly pave the way for a more connected and efficient world.

Ethical Considerations

As with any technological advancement, ethical considerations are crucial. OpenAI places a strong emphasis on responsible AI development and encourages developers to adhere to guidelines and principles that prioritize user safety and privacy. The ChatGPT Store ensures that AI models are vetted and reviewed to maintain high standards.

OpenAI is committed to continuously improving the user experience, and user feedback plays a vital role in shaping the future of AI development. For specific concerns regarding AI and data protection, visit the Data Protection Officer on the ChatGPT Store.


KASBA.AI on ChatGPT Store

One of the most exciting additions to the ChatGPT Store is KASBA.AI, your guide to the latest AI tool reviews, news, AI governance, and learning resources. From answering questions to providing recommendations, KASBA.AI aims to deliver accurate and contextually relevant responses. Its advanced algorithms and state-of-the-art natural language processing make it a valuable asset for anyone looking for AI productivity tools in the marketplace.

Takeaway

OpenAI's ChatGPT Store represents an exciting leap forward in the world of conversational AI. With customisable models, the ChatGPT Store empowers developers to create AI that caters to specific needs, with the potential to propel industries and society to new horizons.

OpenAI's commitment to responsible AI development should ensure that user safety and privacy are prioritised; let's keep an eye here! Meanwhile, as we traverse this new era of conversational AI, the ChatGPT Store will undoubtedly shape how we interact with technology for years to come, with potentially infinite possibilities.


Artificial Intelligence

Two AIs Get Chatty: A Big Leap at UNIGE



Chatting AIs: How It All Started

Scientists at the University of Geneva (UNIGE) have done something super cool. They’ve made an AI that can learn stuff just by hearing it and then can pass on what it’s learned to another AI. It’s like teaching your friend how to do something just by talking to them. This is a big deal because it’s kind of like how we, humans, learn and share stuff with each other, but now machines are doing it too!

Two AIs Get Chatty By Taking Cues from Our Brains

This whole idea came from looking at how our brains work. Our brains have these things called neurons that talk to each other with electrical signals, and that’s how we learn and remember things. The UNIGE team made something similar for computers, called artificial neural networks. These networks help computers understand and use human language, which is pretty awesome.

How Do AIs Talk to Each Other?

For a long time, getting computers to learn new things just from words and then teach those things to other computers was super hard. It’s easy for us humans to learn something new, figure it out, and then explain it to someone else. But for computers? Not so much. That’s why what the UNIGE team did is such a big step forward. They’ve made it possible for one AI to learn a task and then explain it to another AI, all through chatting.


Learning Like Us

The secret here is called Natural Language Processing (NLP). NLP is all about helping computers understand human talk or text. This is what lets AIs get what we’re saying and then do something with that info. The UNIGE team used NLP to teach their AI how to understand instructions and then act on them. But the real magic is that after learning something new, this AI can turn around and teach it to another AI, just like how you might teach your friend a new trick.

Breaking New Ground in AI Learning

The UNIGE team didn’t just stop at making an AI that learns from chatting. They took it a step further. After one AI learns a task, it can explain how to do that task to another AI. Imagine you figured out how to solve a puzzle and then told your friend how to do it too. That’s what these AIs are doing, but until now, this was super hard to achieve with machines.

From Learning to Teaching

The team started with a really smart AI that already knew a lot about language. They hooked it up to a simpler AI, kind of like giving it a buddy to chat with. First, they taught the AI to understand language, like training it to know what we mean when we talk. Then, they moved on to getting the AI to do stuff based on what it learned from words alone. But here’s the kicker: after learning something new, this AI could explain it to its buddy AI in a way that the second one could get it and do the task too.

A Simple Task, A Complex Achievement

The tasks themselves might seem simple, like identifying which side a light was flashing on. But it’s not just about the task; it’s about understanding and teaching it, which is huge for AI. This was the first time two AIs communicated purely through language to share knowledge. It’s like if one robot could teach another robot how to dance just by talking about it. Pretty amazing, right?

Why This Matters

This is a big deal for the future. It’s not just about AIs chatting for fun; it’s about what this means for robots and technology down the line. Imagine robots that can learn tasks just by listening to us and then teach other robots how to do those tasks. It could change how we use robots in homes, hospitals, or even in space. Instead of programming every single step, we could just tell them what we need, and they’d figure it out and help each other out too. It’s like having a team of robots that learn from each other and us, making them way more useful and flexible.

The UNIGE team is already thinking about what’s next. Their AI network is still pretty small, but they believe they can make it bigger and better. We’re talking about robots that not only understand and learn from us but also from each other. This could lead to robots that are more like partners, helping solve problems, invent new things, and maybe even explore the universe with us.

What’s the Future?

This adventure isn’t just about what’s happening now. It’s about opening the door to a future where robots really get us, and each other. The UNIGE team’s work is super exciting for anyone who’s into robots. It’s all about making it possible for machines to have chats with each other, which is a big deal for making smarter, more helpful robots.

The brains behind this project say they’ve just started. They’ve got a small network of AI brains talking, but they’re dreaming big. They’re thinking about making even bigger and smarter networks. Imagine humanoid robots that don’t just understand what you’re telling them but can also share secrets with each other in their own robot language. The researchers are pretty stoked because there’s nothing stopping them from turning this dream into reality.

So, we’re looking at a future where robots could be our buddies, understanding not just what we say but also how we say it. They could help out more around the house, be there for us when we need someone to talk to, or even work alongside us, learning new things and teaching each other without us having to spell it all out. It’s like having a robot friend who’s always there to learn, help, and maybe even make us laugh.

Wrap up

What started as a project at UNIGE could end up changing how we all live, work, and explore. It’s a glimpse into a future where AIs and robots are more than just tools; they’re part of our team, learning and growing with us. Who knows what amazing things they’ll teach us in return?

