The military’s big bet on artificial intelligence

Despite worries about ethics and safety, the US Department of Defense has requested $1.8 billion for AI in 2024


Number 4 Hamilton Place is a be-columned building in central London, home to the Royal Aeronautical Society and four floors of event space. In May, the early 20th-century Edwardian townhouse hosted a decidedly more modern meeting: Defense officials, contractors, and academics from around the world gathered to discuss the future of military air and space technology.

Things soon went awry. At that conference, Tucker Hamilton, chief of AI test and operations for the United States Air Force, seemed to describe a disturbing simulation in which an AI-enabled drone had been tasked with taking down missile sites. But when a human operator started interfering with that objective, he said, the drone killed its operator, and cut the communications system.

Internet fervor and fear followed. At a time of growing public concern about runaway artificial intelligence, many people, including reporters, believed the story was true. But Hamilton soon clarified that this seemingly dystopian simulation never actually ran. It was just a thought experiment.

“There’s lots we can unpack on why that story went sideways,” said Emelia Probasco, a senior fellow at Georgetown University’s Center for Security and Emerging Technology.

Part of the reason is that the scenario might not actually be that far-fetched: Hamilton called the operator-killing a “plausible outcome” in his follow-up comments. And artificial intelligence tools are growing more powerful — and, some critics say, harder to control.

Despite worries about the ethics and safety of AI, the military is betting big on artificial intelligence. The U.S. Department of Defense has requested $1.8 billion for AI and machine learning in 2024, on top of $1.4 billion for a specific initiative that will use AI to link vehicles, sensors, and people scattered across the world. “The U.S. has stated a very active interest in integrating AI across all warfighting functions,” said Benjamin Boudreaux, a policy researcher at the RAND Corporation and co-author of a report called “Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World.”

Indeed, the military is so eager for new technology that “the landscape is a sort of land grab right now for what types of projects should be funded,” Sean Smith, chief engineer at BlueHalo, a defense contractor that sells AI and autonomous systems, wrote in an email to Undark. Other countries, including China, are also investing heavily in military artificial intelligence.


While much of the public anxiety about AI has revolved around its potential effects on jobs, questions about safety and security become even more pressing when lives are on the line.

Those questions have prompted early efforts to put up guardrails on AI’s use and development in the armed forces, at home and abroad, before it’s fully integrated into military operations. And as part of an executive order in late October, President Joe Biden mandated the development of a National Security Memorandum that “will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.”

Stakeholders say such efforts need to move forward — even if how they play out, and what future AI applications they’ll apply to, remain uncertain.

“We know that these AI systems are brittle and unreliable,” said Boudreaux. “And we don’t always have good predictions about what effects will actually result when they are in complex operating environments.”


Artificial intelligence applications are already alive and well within the DOD. They include the mundane, like using ChatGPT to compose an article. But the future holds potential applications with the highest and most headline-making stakes possible, like lethal autonomous weapons systems: arms that could identify and kill someone on their own, without a human signing off on pulling the trigger.

The DOD has also been keen on incorporating AI into its vehicles — so that drones and tanks can better navigate, recognize targets and shoot weapons. Some fighter jets already have AI systems that stop them from colliding with the ground.

The Defense Advanced Research Projects Agency, or DARPA — one of the defense department’s research and development organizations — sponsored a program in 2020 that pitted an AI pilot against a human one. In five simulated “dogfights” — mid-air battles between fighter jets — the human flyer fought Top-Gun-style against the algorithm flying the same simulated plane. The setup looked like a video game, with two onscreen jets chasing each other through the sky. The AI system won, shooting down the digital jets with the human at the helm. Part of the program’s aim, according to DARPA, was to get pilots to trust and respect the bots, and to set the stage for more human-machine collaboration in the sky.

Those are all pretty hardware-centric applications of the technology. But according to Probasco, the Georgetown fellow, software-enabled computer vision — the ability of AI to glean meaningful information from a picture — is changing the way the military deals with visual data. Such technology could be used to, say, identify objects in a spy satellite image. “It takes a really long time for a human being to look at each slide and figure out something that’s changed or something that’s there,” she said.

Smart computer systems can speed that interpretation process up, by flagging changes (military trucks appeared where they weren’t before) or specific objects of interest (that’s a fighter jet), so a human can take a look. “It’s almost like we made AI do ‘Where’s Waldo?’ for the military,” said Probasco, who researches how to create trustworthy and responsible AI for national security applications.
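As a rough illustration of that flag-for-review idea, the sketch below compares two images of the same area and queues the tiles whose pixels changed the most. The tiling, threshold, and toy images are hypothetical; real systems rely on trained detectors rather than simple pixel differencing.

```python
import numpy as np

def changed_regions(before, after, tile=64, threshold=12.0):
    """Return (row, col) corners of tiles whose mean pixel change exceeds threshold."""
    diff = np.abs(after.astype(float) - before.astype(float))
    flagged = []
    for r in range(0, diff.shape[0] - tile + 1, tile):
        for c in range(0, diff.shape[1] - tile + 1, tile):
            if diff[r:r + tile, c:c + tile].mean() > threshold:
                flagged.append((r, c))  # queue this tile for an analyst to review
    return flagged

# Toy example: a bright object appears in the second image where none was before.
before = np.zeros((256, 256))
after = before.copy()
after[130:150, 60:90] = 255
print(changed_regions(before, after))  # tiles covering the new object
```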

In a similar vein, the Defense Innovation Unit — which helps the Pentagon take advantage of commercial technologies — has run three rounds of a computer-vision competition called xView, asking companies to do automated image analysis related to topics including illegal fishing or disaster response. The military can talk openly about this humanitarian work, which is unclassified, and share its capabilities and information with the world. But that is only fruitful if the world deems its AI development robust. The military needs to have a reputation for solid technology. “It’s about having people outside of the U.S. respect us and listen to us and trust us,” said University of California, San Diego professor of data science and philosophy David Danks.

That kind of trust is going to be particularly important in light of an overarching military AI application: a program called Joint All-Domain Command and Control, or JADC2, which aims to integrate data from across the armed forces. Across the planet, instruments on ships, satellites, planes, tanks, trucks, drones, and more are constantly slugging down information on the signals around them, whether those are visual, audio, or come in forms human beings can’t sense, like radio waves. Rather than siloing, say, the Navy’s information from the Army’s, their intelligence will be hooked into one big brain to allow coordination and protect assets. In the past, said Probasco, “if I wanted to shoot something from my ship, I had to detect it with my radar.” JADC2 is a long-term project that’s still in development. But the idea is that once it’s operational, a person could use data from some other radar (or satellite) to enact that lethal force. Humans would still look at the AI’s interpretation and determine whether to pull the trigger (a distinction that may not mean much to a “target”).
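The data-sharing idea is easier to see in miniature. The sketch below, with made-up field names rather than the program’s actual schema, merges position reports from different platforms into one shared picture keyed by track, so a consumer can act on a track it never sensed itself.

```python
from dataclasses import dataclass, field

@dataclass
class SensorReport:
    source: str       # e.g. "ship-radar", "satellite", "drone-camera"
    track_id: str     # shared identifier for the object being tracked
    position: tuple   # (lat, lon)
    timestamp: float

@dataclass
class CommonPicture:
    tracks: dict = field(default_factory=dict)

    def ingest(self, report):
        # Keep the freshest report per track, regardless of which service sent it.
        current = self.tracks.get(report.track_id)
        if current is None or report.timestamp > current.timestamp:
            self.tracks[report.track_id] = report

    def latest(self, track_id):
        return self.tracks.get(track_id)

picture = CommonPicture()
picture.ingest(SensorReport("satellite", "T-042", (34.1, 132.9), 100.0))
picture.ingest(SensorReport("ship-radar", "T-042", (34.2, 133.0), 160.0))
print(picture.latest("T-042").source)  # "ship-radar", since the newer report wins
```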

Probasco’s career actually began in the Navy, where she served on a ship that used a weapons system called Aegis. “At the time, we didn’t call it AI, but now if you were to look at the various definitions, it qualified,” she said. Having that experience as a young adult has shaped her work — which has taken her to the Pentagon and to the Johns Hopkins University Applied Physics Laboratory — in that she always keeps in mind the 18-year-olds, standing at weapons control systems, trying to figure out how to use them responsibly. “That’s more than a technical problem,” she said. “It’s a training problem; it’s a human problem.”

JADC2 could also potentially save people and objects, a moral calculus the DOD is, of course, in the business of doing. It would theoretically make an attack on any given American warship (or jet or satellite) less impactful, since other sensors and weapons could fill in for the lost platform. On the flip side, though, a unified system could also be more vulnerable to problems like cyberattack: With all the information flowing through connected channels, a little corruption could go a long way.

Those are large ambitions — and ones whose achievability and utility even some experts doubt — but the military likes to think big. And so do its employees. In the wild and wide world of military AI development, one person working on an AI project for the Defense Department is a theoretical physicist named Alex Alaniz. Alaniz spends much of his time behind the fences of Kirtland Air Force Base, at the Air Force Research Lab in Albuquerque, New Mexico.

By day, he’s the program manager for the Weapons Engagement Optimizer, an AI system that could help humans manage information in a conflict — like what tracks missiles are taking. Currently, it’s used to play hypothetical scenarios called war games. Strategists use war games to predict how a situation might play out. But Alaniz’s employer has recently taken an interest in a different AI project Alaniz has been pursuing: In his spare time, he has built a chatbot that attempts to converse like Einstein would.


To make the Einstein technology, Alaniz snagged the software code that powered an openly available chatbot online, improved its innards, and then made 101 copies of it. He subsequently programmed and trained each clone to be an expert in one aspect of Einstein’s life — one bot knew about Einstein’s personal details, another his theories of relativity, another his interest in music, another his thoughts on WWII. The chatbots’ areas of expertise can overlap.

Then, he linked them together, forging associations between different topics and pieces of information. He created, in other words, a digital network that he thinks could lead toward artificial general intelligence, and the kinds of connections humans make.
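A toy version of that many-narrow-experts design, using hypothetical topic corpora rather than Alaniz’s actual code, might route each question to whichever specialist’s vocabulary overlaps it most:

```python
import re

# Each "expert" bot only knows its own slice of the corpus (hypothetical snippets here).
EXPERTS = {
    "relativity": "general relativity spacetime curvature gravity light speed",
    "music":      "violin mozart bach sonata playing music improvisation",
    "biography":  "born ulm patent office princeton family letters",
}

def route(question):
    words = set(re.findall(r"[a-z]+", question.lower()))
    # Score each expert by vocabulary overlap and hand the question to the best match.
    scores = {name: len(words & set(corpus.split())) for name, corpus in EXPERTS.items()}
    return max(scores, key=scores.get)

print(route("Did Einstein really play the violin?"))  # -> "music"
```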

Such a technology could be applied to living people, too: If wargamers created realistic, interactive simulations of world leaders — Putin, for example — their simulated conflicts could, in theory, be truer to life and inform decision-making. And even though the technology is currently Alaniz’s, not the Air Force Research Lab’s, and no one knows its ultimate applications, the organization is interested in showing it off, which makes sense in the land-grab landscape Smith described. In a LinkedIn post promoting a talk Alaniz gave at the lab’s Inspire conference in October, the organization said Alaniz’s Einstein work “pushes our ideas of intelligence and how it can be used to approach digital systems better.”

What he really wants, though, is to evolve the Einstein bot into the vaunted artificial general intelligence: a system that learns and thinks like a biological human. That’s something a majority of Americans emphatically don’t want. According to a recent poll from the Artificial Intelligence Policy Institute, 63 percent of Americans think “regulation should aim to actively prevent AI superintelligence.”

Smith, the BlueHalo engineer, has used Einstein and seen how its gears turn. He thinks the technology is most likely to become a kind of helper for the large language models — like ChatGPT — that have exploded recently. The more constrained and controlled set of information that a bot like Einstein has access to could define the boundaries of the large language model’s responses, and provide checks on its answers. And those training systems and guardrails, Smith said, could help prevent them from sending “delusional responses that are inaccurate.”
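Smith’s guardrail role can be sketched in a few lines: before a model’s answer goes out, check the claim against a small vetted reference set and flag anything unsupported. The fact store and keys below are invented for illustration; production systems would use retrieval and entailment checks rather than a string lookup.

```python
# A tiny, vetted reference set (hypothetical entries for illustration).
VETTED_FACTS = {
    "einstein_birth_year": "1879",
    "speed_of_light_km_s": "299792",
}

def check_claim(fact_key, model_answer):
    expected = VETTED_FACTS.get(fact_key)
    if expected is None:
        return "no reference available; route to a human"
    return "supported" if expected in model_answer else "flagged as possible hallucination"

print(check_claim("einstein_birth_year", "Einstein was born in 1879 in Ulm."))  # supported
print(check_claim("einstein_birth_year", "Einstein was born in 1905."))         # flagged
```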


Making sure large language models don’t “hallucinate” wrong information is one concern about AI technology. But the worries extend to more dire scenarios, including an apocalypse triggered by rogue AI. Recently, leading scientists and technologists signed onto a statement that said, simply, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Not long before that, in March 2023, other luminaries — including Steve Wozniak, co-founder of Apple, and Max Tegmark of the AI Institute for Artificial Intelligence and Fundamental Interactions at MIT — had signed an open letter titled “Pause Giant AI Experiments.” The document advocated doing just that, to determine whether giant AI experiments’ effects are positive and their risks manageable. The signatories suggested the AI community take a six-month hiatus to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

“Society has hit pause on other technologies with potentially catastrophic effects on society,” the signers agreed, citing endeavors like human cloning, eugenics and gain-of-function research in a footnote. “We can do so here.”

That pause didn’t happen, and critics like the University of Washington’s Emily Bender have called the letter itself an example of AI hype — the tendency to overstate systems’ true abilities and nearness to human cognition.

Still, government leaders should be concerned about negative reactions, said Boudreaux, the RAND researcher. “There’s a lot of questions and concerns about the role of AI, and it’s all the more important that the U.S. demonstrate its commitment to the responsible deployment of these technologies,” he said.

The idea of a rebellious drone killing its operator might fuel dystopian nightmares. And while that scenario may be something humans clearly want to avoid, national security AI can present more subtle ethical quandaries — particularly since the military operates according to a different set of rules.

For example, “There’s things like intelligence collection, which are considered acceptable behavior by nations,” said Neil Rowe, a professor of computer science at the Naval Postgraduate School. A country could surveil an embassy within its borders in accordance with the 1961 Vienna Convention, but tech companies are bound by a whole different set of regulations; Amazon’s Alexa can’t just record and store conversations it overhears without permission — at least if the company doesn’t want to get into legal trouble. And when a soldier kills a combatant in a military conflict, it’s considered to be a casualty, not a murder, according to the Law of Armed Conflict; exemptions to the United Nations Charter legally allow lethal force in certain circumstances.

“Ethics is about what values we have,” said Danks, the UCSD professor. “How we realize those values through our actions, both individual and collective.” In the national security world, he continued, people tend to agree about the values — safety, security, privacy, respect for human rights — but “there is some disagreement about how to reconcile conflicts between values.”


Danks is among a small community of researchers — including philosophers and computer scientists — who have spent years exploring those conflicts in the national security sector and trying to find potential solutions.

He first got involved in AI ethics and policy work in part because an algorithm he had helped develop to understand the structure of the human brain was likely being used within the intelligence community; the brain’s internal communications, he said, resembled those between nodes of a terrorist cell. “That isn’t what we designed it for at all,” said Danks.

“I feel guilty that I wasn’t awake to it prior to this,” he added. “But it was an awakening that just because I know how I want to use an algorithm I develop, I can’t assume that that’s the only way it could be used. And so then what are my obligations as a researcher?”

Danks’ work in this realm began with a military focus and then, when he saw how much AI was shaping everyday life, also generalized out to things like autonomous cars. Today, his research tends to focus on issues like bias in algorithms, and transparency in AI’s “thinking.”

But, of course, there is no one AI and no one kind of thinking. There are, for instance, machine-learning algorithms that do pattern-recognition or prediction, and logical inference methods that simply provide yes-or-no answers given a set of facts. And Rowe contends that militaries have largely failed to separate what ethical use looks like for different kinds of algorithms, a gap he addressed in a 2022 paper published in Frontiers in Big Data.

One important ethical consideration that differs across AIs, Rowe writes, is whether a system can explain its decisions. If a computer decides that a truck is an enemy, for example, it should be able to identify which aspects of the truck led to that conclusion. “Usually if it’s a good explanation, everybody should be able to understand,” said Rowe.

In the “Where’s Waldo” kind of computer vision Probasco mentioned, for example, a system should be able to explain why it tagged a truck as military, or a jet as a “fighter.” A human could then review the tag and determine whether it was sufficient — or flawed. For example, for the truck identified as a military vehicle, the AI might explain its reasoning by pointing to its matching color and size. But, in reviewing that explanation, a human might see it failed to look at the license plate, tail number, or manufacturer logos that would reveal it to be a commercial model.
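One way to picture the kind of explanation Rowe is asking for: a classifier that returns not just a label but the specific cues it relied on, so a reviewer can see what it ignored. The features and rules below are invented for illustration.

```python
def classify_vehicle(features):
    """Return a label plus the cues that produced it, so a human can audit the call."""
    cues = []
    if features.get("color") == "olive drab":
        cues.append("color matches military palette")
    if features.get("length_m", 0) > 7:
        cues.append("length consistent with a military transport")
    label = "military truck" if len(cues) >= 2 else "unknown"
    return label, cues

label, cues = classify_vehicle({
    "color": "olive drab",
    "length_m": 8.5,
    "manufacturer_logo": "commercial brand",  # never examined by the rules above
})
print(label, cues)
# Nothing in `cues` mentions the commercial logo, so a reviewer can see at a glance
# that the system never considered the evidence marking it as a civilian truck.
```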

Neural networks, which are often used for facial recognition or other computer-vision tasks, can’t give those explanations well. That inability to reason out their responses suggests, says Rowe, that these kinds of algorithms won’t lead to artificial general intelligence — the human-like kind — because humans can (usually) provide rationale. For both the ethical reasons and the potential limits to their generality, Rowe said, neural networks can be problematic for the military. The Army, the Air Force, and the Navy have all already embarked on neural-network projects.

For algorithm types that can explain themselves, Rowe suggests the military program them to display calculations relating to specific ethical concerns, like how many civilian casualties are expected from a given operation. Humans could then survey those variables, weigh them, and make a choice.


How variables get weighted will also come into play with AI pilots, as it does in autonomous cars. If a collision-avoidance system must choose between hitting a military aircraft from its own side, with a human aboard, or a private plane carrying foreign civilians, programmers need to think through what the “right” decision is ahead of time. AI developers in the commercial sector face similar questions. But in general, the military deals with life and death more often than private companies. “In the military context, I think it’s much more common that we’re using technology to help us make, as it were, impossible choices,” said Danks.

Of course, engaging in operations that make those tradeoffs at all is an ethical calculation all its own. As with the example above, some ethical frameworks maintain that humans should be kept in the loop, making the final calls on at least life-or-death decisions. But even that, says Rowe, doesn’t make AI use more straightforwardly ethical. Humans might have information or moral qualms that bots don’t, but the bots can synthesize more data, so they might understand things humans don’t. Humans, famously, aren’t objective either, and they may themselves act unethically.

AI, too, can have its own biases. Rowe’s paper cites a hypothetical system that uses U.S. data to visually identify “friendly” forces. In this case, it might find that friendlies tend to be ta


Let’s Govern AI Rather Than Let It Govern Us


A pivotal moment has unfolded at the United Nations General Assembly. For the first time, a resolution was adopted focused on ensuring Artificial Intelligence (AI) systems are “safe, secure and trustworthy”, marking a significant step towards integrating AI with sustainable development globally. This initiative, led by the United States and supported by an impressive cohort of over 120 other Member States, underscores a collective commitment to navigating the AI landscape with the utmost respect for human rights.

But why does this matter to us, the everyday folks? AI isn’t just about robots from sci-fi movies anymore; it’s here, deeply embedded in our daily lives. From the recommendations on what to watch next on Netflix to the virtual assistant in your smartphone, AI’s influence is undeniable. Yet, as much as it simplifies tasks, the rapid evolution of AI also brings forth a myriad of concerns – privacy issues, ethical dilemmas and the very fabric of our job market being reshaped.

The Unanimous Call for Responsible AI Governance

The resolution highlights a crucial understanding: the rights we hold dear offline must also be protected in the digital realm, throughout the lifecycle of AI systems. It’s a call to action for not just countries but companies, civil societies, researchers, and media outlets to develop and support governance frameworks that ensure the safe and trustworthy use of AI. It acknowledges the varying levels of technological development across the globe and stresses the importance of supporting developing countries to close the digital divide and bolster digital literacy.

The United States Ambassador to the UN, Linda Thomas-Greenfield, shed light on the inclusive dialogue that led to this resolution. It’s seen as a blueprint for future discussions on the challenges AI poses, be it in maintaining peace, security, or responsible military use. This resolution isn’t about stifling innovation; rather, it’s about ensuring that as we advance, we do so with humanity, dignity, and a steadfast commitment to human rights at the forefront.

This unprecedented move by the UN General Assembly is not just a diplomatic achievement; it’s a global acknowledgment that while AI has the potential to transform our world for the better, its governance cannot be taken lightly. The resolution, co-sponsored by countries including China, represents a united front in the face of AI’s rapid advancement and its profound implications.

Bridging the Global Digital Divide

As we stand at this crossroads, the message is clear: the journey of AI is one we must steer with care, ensuring it aligns with our shared values and aspirations. The resolution champions a future where AI serves as a force for good, propelling us towards the Sustainable Development Goals, from eradicating poverty to ensuring quality education for all.


The emphasis on cooperation, especially in aiding developing nations to harness AI, underscores a vision of a world where technological advancement doesn’t widen the gap between nations but brings us closer to achieving global equity. It’s a reminder that in the age of AI, our collective wisdom, empathy, and collaboration are our most valuable assets.

Ambassador Thomas-Greenfield’s remarks resonate with a fundamental truth: the fabric of our future is being woven with threads of artificial intelligence. It’s imperative that we, the global community, hold the loom. The adoption of this resolution is not the end, but a beginning—a stepping stone towards a comprehensive framework where AI enriches lives without compromising our moral compass.

At the heart of this resolution is the conviction that AI, though devoid of consciousness, must operate within the boundaries of our collective human conscience. The call for AI systems that respect human rights isn’t just regulatory rhetoric; it’s an appeal for empathy in algorithms, a plea to encode our digital evolution with the essence of what it means to be human.

This brings to light a pertinent question: How do we ensure that AI’s trajectory remains aligned with human welfare? The resolution’s advocacy for cooperation among nations, especially in supporting developing countries, is pivotal. It acknowledges that the AI divide is not just a matter of technological access but also of ensuring that all nations have a voice in shaping AI’s ethical landscape. By fostering an environment where technology serves humanity universally, we inch closer to a world where AI’s potential is not a privilege but a shared global heritage.

Furthermore, the resolution’s emphasis on bridging the digital divide is a clarion call for inclusivity in the digital age. It’s a recognition that the future we craft with AI should be accessible to all, echoing through classrooms in remote villages and boardrooms in bustling cities alike. The initiative to equip developing nations with AI tools and knowledge is not just an act of technological philanthropy; it’s an investment in a collective future where progress is measured not by the advancements we achieve but by the lives we uplift.

Uniting for a Future Shaped by Human Values

The global consensus on this resolution, with nations like the United States and China leading the charge, signals a watershed moment in international diplomacy. It showcases a rare unity in the quest to harness AI’s potential responsibly, amidst a world often divided by digital disparities. The resolution’s journey, from conception to unanimous adoption, reflects a world waking up to the reality that in the age of AI, our greatest strength lies in our unity.

As we stand at the dawn of this new era, the resolution serves as both a compass and a beacon; a guide to navigate the uncharted waters of AI governance and a light illuminating the path to a future where technology and humanity converge in harmony. The unanimous adoption of this resolution is not just a victory for diplomacy; it’s a promise of hope, a pledge that in the symphony of our future, technology will amplify, not overshadow, the human spirit.

In conclusion, “Let’s Govern AI Rather Than Let It Govern Us” is more than a motto; it’s a mandate for the modern world. It’s a call to action for every one of us to participate in shaping a future where AI tools are wielded with wisdom to weave a tapestry of progress that reflects our highest aspirations and deepest values.


KASBA.AI Now Available on ChatGPT Store

ChatGPT Store by OpenAI is the new platform for developers to create and share unique AI models with monetization opportunities


OpenAI, the leading Artificial Intelligence research laboratory, has taken a significant step forward with the launch of the ChatGPT Store. This new platform allows developers to create and share their unique AI models, expanding the capabilities of the already impressive ChatGPT. Among the exciting additions to the store are Canva, Veed, AllTrails and now KASBA.AI, with many more entering every day.

About OpenAI

OpenAI, co-founded by Sam Altman, Elon Musk and others, has always been at the forefront of AI research. With a mission to ensure that artificial general intelligence benefits all of humanity, they have consistently pushed the boundaries of what is possible in the field.

OpenAI’s ChatGPT has already changed the way we interact with technology, with its ability to generate coherent and contextually relevant responses. Now, with the ChatGPT Store, OpenAI is aiming to empower developers and non-technical users to contribute to and build upon this powerful platform.


What is the ChatGPT Store?

The ChatGPT Store is an exciting initiative that allows developers to create, share and in time monetise their unique AI models. It serves as a marketplace for AI models that can be integrated with ChatGPT.

This means that users can now have access to a wide range of specialised conversational AI models, catering to their specific needs. The ChatGPT Store opens up a world of possibilities, making AI more accessible and customisable than ever before.


Key Features of the ChatGPT Store

Some unique features of the store include customisable AI models, pre-trained models for quick integration and the ability for developers to earn money by selling their models.

Developers can also leverage the rich ecosystem of tools and resources provided by OpenAI to enhance their models. This collaborative marketplace fosters innovation and encourages the development of conversational AI that can cater to various industries and use cases.

Impact on Industries and Society

The launch of the ChatGPT Store has far-reaching implications for industries and society as a whole. By making AI models more accessible and customisable, businesses can now leverage conversational AI to enhance customer support, automate repetitive tasks and improve overall efficiency.

From healthcare and finance to education and entertainment, the impact of AI on various sectors will only grow with the availability of specialised models on the ChatGPT Store. This democratisation of conversational AI technology will undoubtedly pave the way for a more connected and efficient world.

Ethical Considerations

As with any technological advancement, ethical considerations are crucial. OpenAI places a strong emphasis on responsible AI development and encourages developers to adhere to guidelines and principles that prioritize user safety and privacy. The ChatGPT Store ensures that AI models are vetted and reviewed to maintain high standards.

OpenAI is committed to continuously improving the user experience, and user feedback plays a vital role in shaping the future of AI development. For specific concerns regarding AI and data protection, visit the Data Protection Officer on the ChatGPT Store.


KASBA.AI on ChatGPT Store

One of the most exciting additions to the ChatGPT Store is KASBA.AI, your guide to the latest AI tool reviews, news, AI governance and learning resources. From answering questions to providing recommendations, KASBA.AI hopes to deliver accurate and contextually relevant responses. Its advanced algorithms and state-of-the-art natural language processing make it a valuable asset to anyone looking for AI productivity tools in the marketplace.

Takeaway

OpenAI’s ChatGPT Store represents an exciting leap forward in the world of conversational AI. With customisable models, the ChatGPT Store empowers developers to create AI that caters to specific needs, with the potential to propel industries and society to new horizons.

OpenAI’s commitment to responsible AI development should ensure that user safety and privacy are prioritised; let’s keep an eye on this! Meanwhile, as we traverse this new era of conversational AI, the ChatGPT Store will undoubtedly shape how we interact with technology for years to come, with potentially infinite possibilities.


Two AIs Get Chatty: A Big Leap at UNIGE


Chatting AIs: How It All Started

Scientists at the University of Geneva (UNIGE) have done something super cool. They’ve made an AI that can learn stuff just by hearing it and then can pass on what it’s learned to another AI. It’s like teaching your friend how to do something just by talking to them. This is a big deal because it’s kind of like how we, humans, learn and share stuff with each other, but now machines are doing it too!

Two AIs Get Chatty By Taking Cues from Our Brains

This whole idea came from looking at how our brains work. Our brains have these things called neurons that talk to each other with electrical signals, and that’s how we learn and remember things. The UNIGE team made something similar for computers, called artificial neural networks. These networks help computers understand and use human language, which is pretty awesome.

How Do AIs Talk to Each Other?

For a long time, getting computers to learn new things just from words and then teach those things to other computers was super hard. It’s easy for us humans to learn something new, figure it out, and then explain it to someone else. But for computers? Not so much. That’s why what the UNIGE team did is such a big step forward. They’ve made it possible for one AI to learn a task and then explain it to another AI, all through chatting.


Learning Like Us

The secret here is called Natural Language Processing (NLP). NLP is all about helping computers understand human talk or text. This is what lets AIs get what we’re saying and then do something with that info. The UNIGE team used NLP to teach their AI how to understand instructions and then act on them. But the real magic is that after learning something new, this AI can turn around and teach it to another AI, just like how you might teach your friend a new trick.

Breaking New Ground in AI Learning

The UNIGE team didn’t just stop at making an AI that learns from chatting. They took it a step further. After one AI learns a task, it can explain how to do that task to another AI. Imagine you figured out how to solve a puzzle and then told your friend how to do it too. That’s what these AIs are doing, but until now, this was super hard to achieve with machines.

From Learning to Teaching

The team started with a really smart AI that already knew a lot about language. They hooked it up to a simpler AI, kind of like giving it a buddy to chat with. First, they taught the AI to understand language, like training it to know what we mean when we talk. Then, they moved on to getting the AI to do stuff based on what it learned from words alone. But here’s the kicker: after learning something new, this AI could explain it to its buddy AI in a way that let the second one get it and do the task too.

A Simple Task, A Complex Achievement

The tasks themselves might seem simple, like identifying which side a light was flashing on. But it’s not just about the task; it’s about understanding and teaching it, which is huge for AI. This was the first time two AIs communicated purely through language to share knowledge. It’s like if one robot could teach another robot how to dance just by talking about it. Pretty amazing, right?
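Stripped of the neural networks, the shape of that exchange looks something like the sketch below: a “teacher” agent states the rule it learned in plain language, and a “student” agent that has never done the task acts purely from that sentence. The phrasing and task details are invented here; the UNIGE system itself is built from trained networks, not hand-written rules.

```python
def teacher_explain():
    # The "teacher" has already learned the task and states the rule in plain language.
    return "press the button on the same side as the flashing light"

def student_act(instruction, light_side):
    # The "student" has never done the task; it acts purely from the sentence it was given.
    if "same side" in instruction:
        return light_side
    if "opposite side" in instruction:
        return "right" if light_side == "left" else "left"
    raise ValueError("instruction not understood")

instruction = teacher_explain()
print(student_act(instruction, "left"))   # -> "left"
print(student_act(instruction, "right"))  # -> "right"
```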

Why This Matters

This is a big deal for the future. It’s not just about AIs chatting for fun; it’s about what this means for robots and technology down the line. Imagine robots that can learn tasks just by listening to us and then teach other robots how to do those tasks. It could change how we use robots in homes, hospitals, or even in space. Instead of programming every single step, we could just tell them what we need, and they’d figure it out and help each other out too. It’s like having a team of robots that learn from each other and us, making them way more useful and flexible.

The UNIGE team is already thinking about what’s next. Their AI network is still pretty small, but they believe they can make it bigger and better. We’re talking about robots that not only understand and learn from us but also from each other. This could lead to robots that are more like partners, helping solve problems, invent new things, and maybe even explore the universe with us.

What’s the Future?

This adventure isn’t just about what’s happening now. It’s about opening the door to a future where robots really get us, and each other. The UNIGE team’s work is super exciting for anyone who’s into robots. It’s all about making it possible for machines to have chats with each other, which is a big deal for making smarter, more helpful robots.

The brains behind this project say they’ve just started. They’ve got a small network of AI brains talking, but they’re dreaming big. They’re thinking about making even bigger and smarter networks. Imagine humanoid robots that don’t just understand what you’re telling them but can also share secrets with each other in their own robot language. The researchers are pretty stoked because there’s nothing stopping them from turning this dream into reality.

So, we’re looking at a future where robots could be our buddies, understanding not just what we say but also how we say it. They could help out more around the house, be there for us when we need someone to talk to, or even work alongside us, learning new things and teaching each other without us having to spell it all out. It’s like having a robot friend who’s always there to learn, help, and maybe even make us laugh.

Wrap up

What started as a project at UNIGE could end up changing how we all live, work, and explore. It’s a glimpse into a future where AIs and robots are more than just tools; they’re part of our team, learning and growing with us. Who knows what amazing things they’ll teach us in return?

