
Artificial Intelligence

How Ego and Fear Fuelled the Rise of Artificial Intelligence

The question of whether AI will elevate the world or destroy it – or at least inflict grave damage – is an ongoing debate



By Cade Metz, Karen Weise, Nico Grant and Mike Isaac

SAN FRANCISCO — Elon Musk celebrated his 44th birthday in July 2015 at a three-day party thrown by his wife at a California wine country resort dotted with cabins. It was family and friends only, with children racing around the upscale property in Napa Valley.

This was years before Twitter became X, and Tesla had a profitable year. Musk and his wife, Talulah Riley – an actress who played a beautiful but dangerous robot on HBO’s science fiction series Westworld – were a year from throwing in the towel on their second marriage. Larry Page, a party guest, was still CEO of Google. And artificial intelligence had pierced the public consciousness only a few years before, when it was used to identify cats on YouTube – with 16 per cent accuracy.

Larry Page and Elon Musk ended their friendship over their opposing beliefs about the risks of artificial intelligence. Credit: Hokyoung Kim/NYT

AI was the big topic of conversation when Musk and Page sat down near a fire pit beside a swimming pool after dinner the first night. The two billionaires had been friends for more than a decade, and Musk sometimes joked that he occasionally crashed on Page’s sofa after a night playing video games.

But the tone that clear night soon turned contentious as the two debated whether AI would ultimately elevate humanity or destroy it.

As the discussion stretched into the chilly hours, it grew intense, and some of the more than 30 partyers gathered closer to listen. Page, hampered for more than a decade by an unusual ailment in his vocal cords, described his vision of a digital utopia in a whisper. Humans would eventually merge with artificially intelligent machines, he said. One day, there would be many kinds of intelligence competing for resources, and the best would win.

If that happens, Musk said, we’re doomed. The machines will destroy humanity.

With a rasp of frustration, Page insisted his utopia should be pursued. Finally, he called Musk a “specieist,” a person who favours humans over the digital life-forms of the future.

That insult, Musk said later, was “the last straw”.

Many in the crowd seemed gobsmacked, if amused, as they dispersed for the night and considered it just another one of those esoteric debates that often break out at Silicon Valley parties.

But eight years later, the argument between the two men seems prescient. The question of whether AI will elevate the world or destroy it – or at least inflict grave damage – has framed an ongoing debate among Silicon Valley founders, chatbot users, academics, legislators and regulators about whether the technology should be controlled or set free.


That debate has pitted some of the world’s richest men against one another: Musk, Page, Mark Zuckerberg of Meta, tech investor Peter Thiel, Satya Nadella of Microsoft and Sam Altman of OpenAI. All have fought for a piece of the business – which one day could be worth trillions of dollars – and the power to shape it.

At the heart of this competition is a brain-stretching paradox. The people who say they are most worried about AI are among the most determined to create it and enjoy its riches. They have justified their ambition with their strong belief that they alone can keep AI from endangering Earth.

Musk and Page stopped speaking soon after the party that summer. A few weeks later, Musk dined with Altman, who was then running a tech incubator, and several researchers in a private room at the Rosewood hotel in Menlo Park, California, a favoured deal-making spot close to the venture capital offices of Sand Hill Road.

That dinner led to the creation of a startup called OpenAI later in the year. Backed by hundreds of millions of dollars from Musk and other funders, the lab promised to protect the world from Page’s vision.

Thanks to its ChatGPT chatbot, OpenAI has fundamentally changed the technology industry and has introduced the world to the risks and potential of artificial intelligence. OpenAI is valued at more than $US80 billion ($A123 billion), according to two people familiar with the company’s latest funding round. Musk and Altman’s partnership, however, didn’t survive. The two have since stopped speaking.

Elon Musk and Sam Altman. Credit: Bloomberg, AP

“There is disagreement, mistrust, egos,” Altman said. “The closer people are to being pointed in the same direction, the more contentious the disagreements are. You see this in sects and religious orders. There are bitter fights between the closest people.”

Last month that infighting came to OpenAI’s boardroom. Rebel board members tried to force out Altman because, they believed, they could no longer trust him to build AI that would benefit humanity. Over five chaotic days, OpenAI looked as if it were going to fall apart, until the board – pressured by giant investors and employees who threatened to follow Altman out the door – backed down.

The drama inside OpenAI gave the world its first glimpse of the bitter feuds among those who will determine the future of AI.

But years before OpenAI’s near meltdown, there was a little-publicised but ferocious competition in Silicon Valley for control of the technology that is now quickly reshaping the world, from how children are taught to how wars are fought.

The birth of DeepMind

Five years before the Napa Valley party and two before the cat breakthrough on YouTube, Demis Hassabis, a 34-year-old neuroscientist, walked into a cocktail party at Thiel’s San Francisco town house and realised he had hit pay dirt. There in Thiel’s living room, overlooking the city’s Palace of Fine Arts and a swan pond, was a chessboard. Hassabis had once been the second-best player in the world in the under-14 category.

“I was preparing for that meeting for a year,” Hassabis said. “I thought that would be my unique hook in: I knew that he loved chess.”

In 2010, Hassabis and two colleagues, who all lived in Britain, were looking for money to start building “artificial general intelligence,” or AGI, a machine that could do anything the brain could do. At the time, few people were interested in AI. After a half-century of research, the AI field had failed to deliver anything remotely close to the human brain.

Demis Hassabis at the UK’s Artificial Intelligence Safety Summit, at Bletchley Park last month. Credit: Reuters Pool

Still, some scientists and thinkers had become fixated on the downsides of AI. Many, including the three young men from Britain, had a connection to Eliezer Yudkowsky, an internet philosopher and self-taught AI researcher. Yudkowsky was a leader in a community of people who called themselves Rationalists or, in later years, effective altruists.

They believed that AI could find a cure for cancer or solve climate change, but they worried that AI bots might do things their creators had not intended. If the machines became more intelligent than humans, the Rationalists argued, the machines could turn on their creators.

Thiel had become enormously wealthy through an early investment in Facebook and through his work with Musk in the early days of PayPal. He had developed a fascination with the singularity, a trope of science fiction that describes the moment when intelligent technology can no longer be controlled by humanity.

With funding from Thiel, Yudkowsky had expanded his AI lab and created an annual conference on the singularity. Years before, one of Hassabis’ two colleagues had met Yudkowsky, and he snagged them speaking spots at the conference, ensuring they’d be invited to Thiel’s party.

Yudkowsky introduced Hassabis to Thiel.

Hassabis assumed that lots of people at the party would be trying to squeeze their host for money. His strategy was to arrange another meeting. There was a deep tension between the bishop and the knight, he told Thiel. The two pieces carried the same value, but the best players understood that their strengths were vastly different.

It worked.

Charmed, Thiel invited the group back the next day, where they gathered in the kitchen. Their host had just finished his morning workout and was still sweating in a shiny tracksuit. A butler handed him a Diet Coke. The three made their pitch, and soon Thiel and his venture capital firm agreed to put £1.4 million ($A2.7 million) into their startup. He was their first major investor.

They named their company DeepMind, a nod to “deep learning,” a way for AI systems to learn skills by analysing large amounts of data; to neuroscience; and to the Deep Thought supercomputer from the sci-fi novel The Hitchhiker’s Guide to the Galaxy. By the spring of 2010, they were building their dream machine. They wholeheartedly believed that because they understood the risks, they were uniquely positioned to protect the world.

“I don’t see this as a contradictory position,” said Mustafa Suleyman, one of the three DeepMind founders. “There are huge benefits to come from these technologies. The goal is not to eliminate them or pause their development. The goal is to mitigate the downsides.”


Having won over Thiel, Hassabis worked his way into Musk’s orbit. About two years later, they met at a conference organised by Thiel’s investment fund, which had also put money into Musk’s company SpaceX. Hassabis secured a tour of SpaceX headquarters. Afterward, with rocket hulls hanging from the ceiling, the two men lunched in the cafeteria and talked. Musk explained that his plan was to colonise Mars to escape overpopulation and other dangers on Earth. Hassabis replied that the plan would work – so long as superintelligent machines didn’t follow and destroy humanity on Mars, too.

Musk was speechless. He hadn’t thought about that particular danger. Musk soon invested in DeepMind alongside Thiel, so he could be closer to the creation of this technology.

Flush with cash, DeepMind hired researchers who specialised in neural networks, complex algorithms created in the image of the human brain. A neural network is essentially a giant mathematical system that spends days, weeks or even months identifying patterns in large amounts of digital data. First developed in the 1950s, these systems could learn to handle tasks on their own. After analysing names and addresses scribbled on hundreds of envelopes, for instance, they could read handwritten text.
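The learning loop described above can be sketched in miniature. What follows is a hypothetical, bare-bones illustration (a single artificial neuron trained with the classic perceptron rule, nothing like the scale of DeepMind’s systems): it adjusts its weights over repeated passes through labelled examples until it reproduces a simple pattern.

```python
# Toy sketch: one artificial neuron learning a pattern from labelled
# examples via the perceptron update rule. Real neural networks stack
# millions of such units, but the learn-from-data idea is the same.

def train(examples, steps=100, lr=0.1):
    """Learn weights and a bias for two-feature inputs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(steps):
        for x, target in examples:
            pred = 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
            err = target - pred          # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x[0]      # nudge weights toward the answer
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0

# Pattern to learn: fire only when both inputs are "on" (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

The same principle, applied to far larger networks and far more data, is what let systems read handwritten addresses on envelopes.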

DeepMind took the concept further. It built a system that could learn to play classic Atari games such as Space Invaders, Pong and Breakout to illustrate what was possible.

This got the attention of another Silicon Valley powerhouse, Google, and specifically Page. He saw a demonstration of DeepMind’s machine playing Atari games. He wanted in.

The talent auction

In the spring of 2012, Geoffrey Hinton, a 64-year-old professor at the University of Toronto, and two graduate students published a research paper that showed the world what AI could do. They trained a neural network to recognise common objects such as flowers, dogs and cars.

Scientists were surprised by the accuracy of the technology built by Hinton and his students. One who took particular notice was Yu Kai, an AI researcher who had met Hinton at a research conference and had recently started working for Baidu, a giant Chinese internet company. Baidu offered Hinton and his students $US12 million to join the company in Beijing, according to three people familiar with the offer.

Hinton turned Baidu down, but the money got his attention.

The Cambridge-educated British expatriate had spent most of his career in academia, except for occasional stints at Microsoft and Google, and was not especially driven by money. But he had a neurodivergent child, and the money would mean financial security.

“We did not know how much we were worth,” Hinton said. He consulted lawyers and experts on acquisitions and came up with a plan: “We would organise an auction, and we would sell ourselves.” The auction would take place during an annual AI conference at the Harrah’s hotel and casino on Lake Tahoe.

Big Tech took notice.

Google, Microsoft, Baidu and other companies were beginning to believe that neural networks were a path to machines that could not only see but also hear, write, talk and — eventually — think.


Page had seen similar technology at Google Brain, his company’s AI lab, and he thought Hinton’s research could elevate his scientists’ work. He gave Alan Eustace, Google’s senior vice president of engineering, what amounted to a blank check to hire any AI expertise he needed.

Eustace and Jeff Dean, who led the Brain lab, flew to Lake Tahoe and took Hinton and his students out to dinner at a steakhouse inside the hotel the night before the auction. The smell of old cigarettes was overpowering, Dean recalled. They made the case for coming to work at Google.

The next day, Hinton ran the auction from his hotel room. Because of an old back injury, he rarely sat down. He turned a trash can upside down on a table, put his laptop on top and watched the bids roll in over the next two days.

Google made an offer. So did Microsoft. DeepMind quickly bowed out as the price went up. The industry giants pushed the bids to $US20 million and then $US25 million, according to documents detailing the auction. As the price passed $US30 million, Microsoft quit, but it rejoined the bidding at $US37 million.

“We felt like we were in a movie,” Hinton said.

Then Microsoft dropped out a second time. Only Baidu and Google were left, and they pushed the bidding to $US42 million, $US43 million. Finally, at $US44 million, Hinton and his students stopped the auction. The bids were still climbing, but they wanted to work for Google. And the money was staggering.

It was an unmistakable sign that deep-pocketed companies were determined to buy the most talented AI researchers, which was not lost on Hassabis at DeepMind. He had always told his employees that DeepMind would remain an independent company. That was, he believed, the best way to ensure its technology didn’t turn into something dangerous. But as Big Tech entered the talent race, he decided he had no choice: It was time to sell.

By the end of 2012, Google and Facebook were angling to acquire the London lab, according to three people familiar with the matter. Hassabis and his co-founders insisted on two conditions: No DeepMind technology could be used for military purposes, and its AGI technology must be overseen by an independent board of technologists and ethicists.

Google offered $US650 million. Zuckerberg of Facebook offered a bigger payout to DeepMind’s founders but would not agree to the conditions. DeepMind sold to Google.

Zuckerberg was determined to build an AI lab of his own.

He hired Yann LeCun, a French computer scientist who had also done pioneering AI research, to run it. A year after Hinton’s auction, Zuckerberg and LeCun flew to Lake Tahoe for the same AI conference. While padding around a suite at the Harrah’s casino in his socks, Zuckerberg personally interviewed top researchers, who were soon offered millions of dollars in salary and stock.

AI was once laughed off. Now the richest men in Silicon Valley were shelling out billions to keep from being left behind.

The lost ethics board

When Musk invested in DeepMind, he broke his own informal rule – that he would not invest in any company he didn’t run himself. The downsides of his decision were already apparent when, only a month or so after his birthday spat with Page, he again found himself face to face with his former friend and fellow billionaire.

Larry Page, co-founder of Google. Credit: Bloomberg

The occasion was the first meeting of DeepMind’s ethics board, on August 14, 2015. The board had been set up at the insistence of the startup’s founders to ensure that their technology did no harm after the sale. The members convened in a conference room just outside Musk’s office at SpaceX.

But that’s where Musk’s control ended. When Google bought DeepMind, it bought the whole thing. Musk was out. Financially, he had come out ahead, but he was unhappy.

Three Google executives now firmly in control of DeepMind were there: Page; Sergey Brin, a Google co-founder and Tesla investor; and Eric Schmidt, Google’s chair. Among the other attendees were Reid Hoffman, another PayPal founder; and Toby Ord, an Australian philosopher studying “existential risk”.

The DeepMind founders reported that they were pushing ahead with their work but that they were aware the technology carried serious risks.

Suleyman, the DeepMind co-founder, gave a presentation called “The Pitchforkers Are Coming.” AI could lead to an explosion in disinformation, he told the board. He fretted that as the technology replaced countless jobs in the coming years, the public would accuse Google of stealing their livelihoods. Google would need to share its wealth with the millions who could no longer find work and provide a “universal basic income,” he argued.

Musk agreed. But it was pretty clear that his Google guests were not prepared to embark on a redistribution of (their) wealth. Schmidt said he thought the worries were completely overblown. In his usual whisper, Page agreed. AI would create more jobs than it took away, he argued.

Eight months later, DeepMind had a breakthrough that stunned the AI community and the world. A DeepMind machine called AlphaGo beat one of the world’s best players at the ancient game of Go. The game, streamed over the internet, was watched by 200 million people across the globe. Most researchers had assumed that AI needed another 10 years to muster the ingenuity to do that.

Demis Hassabis, right, co-founder of DeepMind, with South Korean professional Lee Se-dol. Lee played Go against the DeepMind machine AlphaGo in March 2016. Credit: EPA

Rationalists, effective altruists and others who worried about the risks of AI claimed the computer’s win validated their fears.

“This is another indication that AI is progressing faster than even many experts anticipated,” Victoria Krakovna, who would soon join DeepMind as an “AI safety” researcher, wrote in a blog post.

DeepMind’s founders were increasingly worried about what Google would do with their inventions. In 2017, they tried to break away from the company. Google responded by increasing the salaries and stock award packages of the DeepMind founders and their staff. They stayed put.

The ethics board never had a second meeting.

The Breakup

Convinced that Page’s optimistic view of AI was dead wrong, and angry at his loss of DeepMind, Musk built his own lab.

OpenAI was founded in late 2015, just a few months after he met with Altman at the Rosewood hotel in Silicon Valley.

Sam Altman. Credit: Bloomberg Businessweek

Musk pumped money into the lab, and his former PayPal buddies – Hoffman and Thiel – came along for the ride. The three men and others pledged to put $US1 billion into the project, which Altman, who was 30 at the time, would help run. To get them started, they poached Ilya Sutskever from Google. (Sutskever was one of the graduate students Google “bought” in Hinton’s auction.)

Initially, Musk wanted to operate OpenAI as a nonprofit, free from the economic incentives that were driving Google and other corporations. But by the time Google wowed the tech community with its Go stunt, Musk was changing his mind about how it should be run. He desperately wanted OpenAI to invent something that would capture the world’s imagination and close the gap with Google, but it wasn’t getting the job done as a nonprofit.

In late 2017, Musk hatched a plan to wrest control of the lab from Altman and the other founders and transform it into a commercial operation that would join forces with Tesla and rely on supercomputers the car company was developing, according to four people familiar with the matter. When Altman and others pushed back, Musk quit and said he would focus on his own AI work at Tesla.

In February 2018, he announced his departure to OpenAI’s staff on the top floor of the startup’s offices in a converted truck factory, three people who attended the meeting said. When he said that OpenAI needed to move faster, one researcher retorted at the meeting that Musk was being reckless. Musk called the researcher a “jackass” and stormed out, taking his deep pockets with him.

OpenAI suddenly needed new financing in a hurry. Altman flew to Sun Valley for a conference and ran into Satya Nadella, Microsoft’s CEO. A tie-up seemed natural. Altman knew Microsoft’s chief technology officer, Kevin Scott. Microsoft had bought LinkedIn from Hoffman, an OpenAI board member. Nadella told Scott to get it done. The deal closed in 2019.

Altman and OpenAI had formed a for-profit company under the original nonprofit, they had $US1 billion in fresh capital, and Microsoft had a new way to build AI into its vast cloud computing service.

Not everyone inside OpenAI was happy.

Dario Amodei, a researcher with ties to the effective altruist community, had been on hand at the Rosewood hotel when OpenAI was born. Amodei, who endlessly twisted his curls between his fingers as he talked, was leading the lab’s efforts to build a neural network called a large language model that could learn from enormous amounts of digital text.

Seeking the path to artificial general intelligence, AGI. Credit: iStock

Two AIs Get Chatty: A Big Leap at UNIGE

Chatting AIs: How It All Started

Scientists at the University of Geneva (UNIGE) have done something super cool. They’ve made an AI that can learn stuff just by hearing it and then can pass on what it’s learned to another AI. It’s like teaching your friend how to do something just by talking to them. This is a big deal because it’s kind of like how we, humans, learn and share stuff with each other, but now machines are doing it too!

Two AIs Get Chatty By Taking Cues from Our Brains

This whole idea came from looking at how our brains work. Our brains have these things called neurons that talk to each other with electrical signals, and that’s how we learn and remember things. The UNIGE team made something similar for computers, called artificial neural networks. These networks help computers understand and use human language, which is pretty awesome.

How Do AIs Talk to Each Other?

For a long time, getting computers to learn new things just from words and then teach those things to other computers was super hard. It’s easy for us humans to learn something new, figure it out, and then explain it to someone else. But for computers? Not so much. That’s why what the UNIGE team did is such a big step forward. They’ve made it possible for one AI to learn a task and then explain it to another AI, all through chatting.


Learning Like Us

The secret here is called Natural Language Processing (NLP). NLP is all about helping computers understand human talk or text. This is what lets AIs get what we’re saying and then do something with that info. The UNIGE team used NLP to teach their AI how to understand instructions and then act on them. But the real magic is that after learning something new, this AI can turn around and teach it to another AI, just like how you might teach your friend a new trick.
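The learn-then-teach loop can be imagined in miniature. The sketch below is purely hypothetical (simple string-matching stand-ins, nothing like UNIGE’s actual neural networks): a “teacher” agent learns a task from examples, puts what it knows into plain sentences, and a “student” agent picks the task up from those sentences alone.

```python
# Toy sketch of knowledge transfer through language (not the UNIGE model):
# agent A learns instruction -> action pairs, then "teaches" agent B
# purely by emitting sentences that B parses back into knowledge.

class ToyAgent:
    def __init__(self):
        self.knowledge = {}  # maps an instruction to an action

    def learn_from_examples(self, examples):
        for instruction, action in examples:
            self.knowledge[instruction] = action

    def explain(self):
        # Turn what it knows into sentences another agent can read.
        return [f"when told '{k}', do '{v}'" for k, v in self.knowledge.items()]

    def learn_from_speech(self, sentences):
        for s in sentences:
            parts = s.split("'")          # quoted spans carry the content
            self.knowledge[parts[1]] = parts[3]

    def act(self, instruction):
        return self.knowledge.get(instruction, "unknown")

teacher = ToyAgent()
teacher.learn_from_examples([("light on left", "press left"),
                             ("light on right", "press right")])
student = ToyAgent()
student.learn_from_speech(teacher.explain())
print(student.act("light on left"))  # -> press left
```

The real system does this with neural networks rather than lookup tables, which is what makes it hard; but the shape of the trick, learn a task and pass it on in words, is the same.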

Breaking New Ground in AI Learning

The UNIGE team didn’t just stop at making an AI that learns from chatting. They took it a step further. After one AI learns a task, it can explain how to do that task to another AI. Imagine you figured out how to solve a puzzle and then told your friend how to do it too. That’s what these AIs are doing, but until now, this was super hard to achieve with machines.

From Learning to Teaching

The team started with a really smart AI that already knew a lot about language. They hooked it up to a simpler AI, kind of like giving it a buddy to chat with. First, they taught the AI to understand language, like training it to know what we mean when we talk. Then, they moved on to getting the AI to do stuff based on what it learned from words alone. But here’s the kicker: after learning something new, this AI could explain it to its buddy AI in a way that the second one could get it and do the task too.

A Simple Task, A Complex Achievement

The tasks themselves might seem simple, like identifying which side a light was flashing on. But it’s not just about the task; it’s about understanding and teaching it, which is huge for AI. This was the first time two AIs communicated purely through language to share knowledge. It’s like if one robot could teach another robot how to dance just by talking about it. Pretty amazing, right?

Why This Matters

This is a big deal for the future. It’s not just about AIs chatting for fun; it’s about what this means for robots and technology down the line. Imagine robots that can learn tasks just by listening to us and then teach other robots how to do those tasks. It could change how we use robots in homes, hospitals, or even in space. Instead of programming every single step, we could just tell them what we need, and they’d figure it out and help each other out too. It’s like having a team of robots that learn from each other and us, making them way more useful and flexible.

The UNIGE team is already thinking about what’s next. Their AI network is still pretty small, but they believe they can make it bigger and better. We’re talking about robots that not only understand and learn from us but also from each other. This could lead to robots that are more like partners, helping solve problems, invent new things, and maybe even explore the universe with us.

What’s the Future?

This adventure isn’t just about what’s happening now. It’s about opening the door to a future where robots really get us, and each other. The UNIGE team’s work is super exciting for anyone who’s into robots. It’s all about making it possible for machines to have chats with each other, which is a big deal for making smarter, more helpful robots.

The brains behind this project say they’ve just started. They’ve got a small network of AI brains talking, but they’re dreaming big. They’re thinking about making even bigger and smarter networks. Imagine humanoid robots that don’t just understand what you’re telling them but can also share secrets with each other in their own robot language. The researchers are pretty stoked because there’s nothing stopping them from turning this dream into reality.

So, we’re looking at a future where robots could be our buddies, understanding not just what we say but also how we say it. They could help out more around the house, be there for us when we need someone to talk to, or even work alongside us, learning new things and teaching each other without us having to spell it all out. It’s like having a robot friend who’s always there to learn, help, and maybe even make us laugh.

Wrap up

What started as a project at UNIGE could end up changing how we all live, work, and explore. It’s a glimpse into a future where AIs and robots are more than just tools; they’re part of our team, learning and growing with us. Who knows what amazing things they’ll teach us in return?


KASBA.AI Now Available on ChatGPT Store

ChatGPT Store by OpenAI is the new platform for developers to create and share unique AI models with monetization opportunities




OpenAI, the leading Artificial Intelligence research laboratory, has taken a significant step forward with the launch of the ChatGPT Store. This new platform allows developers to create and share their unique AI models, expanding the capabilities of the already impressive ChatGPT. Among the exciting additions to the store are Canva, Veed, Alltrails and now KASBA.AI, with many more arriving every day.

About OpenAI

OpenAI, founded by Elon Musk and Sam Altman, has always been at the forefront of AI research. With a mission to ensure that artificial general intelligence benefits all of humanity, they have consistently pushed the boundaries of what is possible in the field.

OpenAI’s ChatGPT has already changed the way we interact with technology with its ability to generate coherent and contextually relevant responses. Now, with the ChatGPT Store, OpenAI is aiming to empower developers and non-technical users to contribute and build upon this powerful platform.

What is the ChatGPT Store?

The ChatGPT Store is an exciting initiative that allows developers to create, share and, in time, monetise their unique AI models. It serves as a marketplace for AI models that can be integrated with ChatGPT.

This means that users can now have access to a wide range of specialised conversational AI models, catering to their specific needs. The ChatGPT Store opens up a world of possibilities, making AI more accessible and customisable than ever before.


Key Features of the ChatGPT Store

Some unique features of the store include customisable AI models, pre-trained models for quick integration and the ability for developers to earn money by selling their models.

Developers can also leverage the rich ecosystem of tools and resources provided by OpenAI to enhance their models. This collaborative marketplace fosters innovation and encourages the development of conversational AI that can cater to various industries and use cases.

Impact on Industries and Society

The launch of the ChatGPT Store has far-reaching implications for industries and society as a whole. By making AI models more accessible and customisable, businesses can now leverage conversational AI to enhance customer support, automate repetitive tasks and improve overall efficiency.

From healthcare and finance to education and entertainment the impact of AI on various sectors will only grow with the availability of specialised models on the ChatGPT Store. This democratisation of conversational AI technology will undoubtedly pave the way for a more connected and efficient world.

Ethical Considerations

As with any technological advancement, ethical considerations are crucial. OpenAI places a strong emphasis on responsible AI development and encourages developers to adhere to guidelines and principles that prioritise user safety and privacy. The ChatGPT Store ensures that AI models are vetted and reviewed to maintain high standards.

OpenAI is committed to continuously improving the user experience, and user feedback plays a vital role in shaping the future of AI development. For specific concerns regarding AI and data protection visit Data Protection Officer on ChatGPT Store.


KASBA.AI on ChatGPT Store

One of the most exciting additions to the ChatGPT Store is KASBA.AI, your guide to the latest AI tool reviews, news, AI governance and learning resources. From answering questions to providing recommendations, KASBA.AI hopes to deliver accurate and contextually relevant responses. Its advanced algorithms and state-of-the-art natural language processing make it a valuable asset to anyone looking for AI productivity tools in the marketplace.


OpenAI’s ChatGPT Store represents an exciting leap forward in the world of conversational AI. With customisable models, the ChatGPT Store empowers developers to create AI that caters to specific needs, with the potential to propel industries and society to new horizons.

OpenAI’s commitment to responsible AI development should ensure that user safety and privacy are prioritised; let’s keep an eye here! Meanwhile, as we traverse this new era of conversational AI, the ChatGPT Store will undoubtedly shape how we interact with technology in the years to come, with potentially infinite possibilities.


Let’s Govern AI Rather Than Let It Govern Us




A pivotal moment has unfolded at the United Nations General Assembly. For the first time, a resolution was adopted focused on ensuring Artificial Intelligence (AI) systems are “safe, secure and trustworthy”, marking a significant step towards integrating AI with sustainable development globally. This initiative, led by the United States and supported by an impressive cohort of over 120 other Member States, underscores a collective commitment to navigating the AI landscape with the utmost respect for human rights.

But why does this matter to us, the everyday folks? AI isn’t just about robots from sci-fi movies anymore; it’s here, deeply embedded in our daily lives. From the recommendations on what to watch next on Netflix to the virtual assistant in your smartphone, AI’s influence is undeniable. Yet, as much as it simplifies tasks, the rapid evolution of AI also brings a myriad of concerns: privacy issues, ethical dilemmas and the very fabric of our job market being reshaped.

The Unanimous Call for Responsible AI Governance

The resolution highlights a crucial understanding: the rights we hold dear offline must also be protected in the digital realm, throughout the lifecycle of AI systems. It’s a call to action for not just countries but companies, civil society, researchers and media outlets to develop and support governance frameworks that ensure the safe and trustworthy use of AI. It acknowledges the varying levels of technological development across the globe and stresses the importance of supporting developing countries to close the digital divide and bolster digital literacy.

The United States Ambassador to the UN, Linda Thomas-Greenfield, shed light on the inclusive dialogue that led to this resolution. It’s seen as a blueprint for future discussions on the challenges AI poses, be it in maintaining peace and security or in responsible military use. This resolution isn’t about stifling innovation; rather, it’s about ensuring that as we advance, we do so with humanity, dignity and a steadfast commitment to human rights at the forefront.

This unprecedented move by the UN General Assembly is not just a diplomatic achievement; it’s a global acknowledgment that while AI has the potential to transform our world for the better, its governance cannot be taken lightly. The resolution, co-sponsored by countries including China, represents a united front in the face of AI’s rapid advancement and its profound implications.

Bridging the Global Digital Divide

As we stand at this crossroads, the message is clear: the journey of AI is one we must steer with care, ensuring it aligns with our shared values and aspirations. The resolution champions a future where AI serves as a force for good, propelling us towards the Sustainable Development Goals, from eradicating poverty to ensuring quality education for all.


The emphasis on cooperation, especially in aiding developing nations to harness AI, underscores a vision of a world where technological advancement doesn’t widen the gap between nations but brings us closer to achieving global equity. It’s a reminder that in the age of AI, our collective wisdom, empathy, and collaboration are our most valuable assets.

Ambassador Thomas-Greenfield’s remarks resonate with a fundamental truth: the fabric of our future is being woven with threads of artificial intelligence. It’s imperative that we, the global community, hold the loom. The adoption of this resolution is not the end, but a beginning—a stepping stone towards a comprehensive framework where AI enriches lives without compromising our moral compass.

At the heart of this resolution is the conviction that AI, though devoid of consciousness, must operate within the boundaries of our collective human conscience. The call for AI systems that respect human rights isn’t just regulatory rhetoric; it’s an appeal for empathy in algorithms, a plea to encode our digital evolution with the essence of what it means to be human.

This brings to light a pertinent question: How do we ensure that AI’s trajectory remains aligned with human welfare? The resolution’s advocacy for cooperation among nations, especially in supporting developing countries, is pivotal. It acknowledges that the AI divide is not just a matter of technological access but also of ensuring that all nations have a voice in shaping AI’s ethical landscape. By fostering an environment where technology serves humanity universally, we inch closer to a world where AI’s potential is not a privilege but a shared global heritage.

Furthermore, the resolution’s emphasis on bridging the digital divide is a clarion call for inclusivity in the digital age. It’s a recognition that the future we craft with AI should be accessible to all, echoing through classrooms in remote villages and boardrooms in bustling cities alike. The initiative to equip developing nations with AI tools and knowledge is not just an act of technological philanthropy; it’s an investment in a collective future where progress is measured not by the advancements we achieve but by the lives we uplift.

Uniting for a Future Shaped by Human Values

The global consensus on this resolution, with nations like the United States and China leading the charge, signals a watershed moment in international diplomacy. It showcases a rare unity in the quest to harness AI’s potential responsibly, amidst a world often divided by digital disparities. The resolution’s journey, from conception to unanimous adoption, reflects a world waking up to the reality that in the age of AI, our greatest strength lies in our unity.

As we stand at the dawn of this new era, the resolution serves as both a compass and a beacon; a guide to navigate the uncharted waters of AI governance and a light illuminating the path to a future where technology and humanity converge in harmony. The unanimous adoption of this resolution is not just a victory for diplomacy; it’s a promise of hope, a pledge that in the symphony of our future, technology will amplify, not overshadow, the human spirit.

In conclusion, “Let’s Govern AI Rather Than Let It Govern Us” is more than a motto; it’s a mandate for the modern world. It’s a call to action for every one of us to participate in shaping a future where AI tools are wielded with wisdom, weaving a tapestry of progress that reflects our highest aspirations and deepest values.




Disclosure: KASBA.AI participates in various affiliate programs which means we may earn a fee on purchases and product links to our partner websites. Copyright © 2024 KASBA.AI