
This week in AI: AI ethics keeps falling by the wayside

Keeping up with an industry as fast-moving as artificial intelligence is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week in AI, the news cycle finally quieted down a bit ahead of the holiday season. But that’s not to suggest there was a dearth of things to write about, a blessing and a curse for this sleep-deprived reporter.

A particular headline from the AP caught my eye this morning: “AI image generators are being trained on explicit photos of children.” The gist of the story: LAION, a dataset used to train many popular open source and commercial AI image generators, including Stable Diffusion and Imagen, contains thousands of images of suspected child sexual abuse. The Stanford Internet Observatory, a watchdog group based at Stanford, worked with anti-abuse charities to identify the illegal material and report the links to law enforcement.

Now, LAION, a nonprofit, has taken down its training data and pledged to remove the offending materials before republishing it. But the incident serves to underline just how little thought is being put into generative AI products as competitive pressures ramp up.

Thanks to the proliferation of no-code AI model creation tools, it’s becoming frightfully easy to train generative AI on any dataset imaginable. That’s a boon for startups and tech giants alike looking to get such models out the door. With the lower barrier to entry, however, comes the temptation to cast aside ethics in favour of an accelerated path to market.

Ethics is hard — there’s no denying that. Combing through the thousands of problematic images in LAION, to take this week’s example, won’t happen overnight. And ideally, developing AI ethically involves working with all relevant stakeholders, including organizations that represent groups often marginalized and adversely impacted by AI systems.

The industry is full of examples of AI release decisions made with shareholders, not ethicists, in mind. Take, for instance, Bing Chat (now Microsoft Copilot), Microsoft’s AI-powered chatbot on Bing, which at launch compared a journalist to Hitler and insulted their appearance. As of October, ChatGPT and Bard, Google’s ChatGPT competitor, were still giving outdated, racist medical advice. And the latest version of OpenAI’s image generator DALL-E shows evidence of Anglocentrism.

Suffice it to say harms are being done in the pursuit of AI superiority — or at least Wall Street’s notion of AI superiority. Perhaps with the passage of the EU’s AI regulations, which threaten fines for noncompliance with certain AI guardrails, there’s some hope on the horizon. But the road ahead is long indeed.

Here are some other AI stories of note from the past few days:

Predictions for AI in 2024: Devin lays out his predictions for AI in 2024, touching on how AI might impact the U.S. primary elections and what’s next for OpenAI, among other topics.

Against pseudanthropy: Devin also wrote a piece arguing that AI should be prohibited from imitating human behaviour.

Microsoft Copilot gets music creation: Copilot, Microsoft’s AI-powered chatbot, can now compose songs thanks to an integration with GenAI music app Suno.

Facial recognition out at Rite Aid: Rite Aid has been banned from using facial recognition tech for five years after the Federal Trade Commission found that the U.S. drugstore giant’s “reckless use of facial surveillance systems” left customers humiliated and put their “sensitive information at risk.”

EU offers compute resources: The EU is expanding its plan, originally announced back in September and kicked off last month, to support homegrown AI startups by providing them with access to processing power for model training on the bloc’s supercomputers.

OpenAI gives board new powers: OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. A new “safety advisory group” will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power.

Q&A with UC Berkeley’s Ken Goldberg: For his regular Actuator newsletter, Brian sat down with Ken Goldberg, a professor at UC Berkeley, a startup founder and an accomplished roboticist, to talk humanoid robots and broader trends in the robotics industry.

CIOs take it slow with GenAI: Ron writes that, while CIOs are under pressure to deliver the kind of experiences people are seeing when they play with ChatGPT online, most are taking a deliberate, cautious approach to adopting the tech for the enterprise.

News publishers sue Google over AI: A class action lawsuit filed by several news publishers accuses Google of “siphon[ing] off” news content through anticompetitive means, partly through AI tech like Google’s Search Generative Experience (SGE) and Bard chatbot.

OpenAI inks deal with Axel Springer: Speaking of publishers, OpenAI inked a deal with Axel Springer, the Berlin-based owner of publications including Business Insider and Politico, to train its generative AI models on the publisher’s content and add recent Axel Springer-published articles to ChatGPT.

Google brings Gemini to more places: Google integrated its Gemini models with more of its products and services, including its Vertex AI managed AI dev platform and AI Studio, the company’s tool for authoring AI-based chatbots and other experiences along those lines.

More machine learnings

Certainly the wildest (and easiest to misinterpret) research of the last week or two has to be life2vec, a Danish study that uses countless data points from a person’s life to predict what that person is like and when they’ll die. Roughly!


The study isn’t claiming oracular accuracy (say that three times fast, by the way) but rather intends to show that if our lives are the sum of our experiences, those paths can be extrapolated somewhat using current machine learning techniques. Between upbringing, education, work, health, hobbies and other metrics, one may reasonably predict not just whether someone is, say, introverted or extroverted, but how these factors may affect life expectancy. We’re not quite at “precrime” levels here but you can bet insurance companies can’t wait to license this work.
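If you’re curious what that looks like mechanically, here’s a minimal sketch of the general recipe, not the study’s actual architecture or data: life events become tokens, a sequence model encodes them, and a small head produces a risk score. Every name and event label below is invented for illustration.

```python
# A toy version of the idea, not the study's model: life events as tokens,
# a transformer encoder over the sequence, and a risk score at the end.
# EVENT_VOCAB and the example sequences are invented for illustration.
import torch
import torch.nn as nn

EVENT_VOCAB = {"<pad>": 0, "born_urban": 1, "degree_stem": 2,
               "job_manual": 3, "hospital_visit": 4, "promotion": 5}

class LifeSeqModel(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 32, heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 1)  # one logit, e.g. mortality risk

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(tokens))          # (batch, seq, dim)
        return self.head(h.mean(dim=1)).squeeze(-1)   # pool, then score

model = LifeSeqModel(len(EVENT_VOCAB))
lives = torch.tensor([[1, 2, 5, 0],      # two toy life histories, padded
                      [1, 3, 4, 4]])
print(torch.sigmoid(model(lives)))       # untrained, so outputs are noise
```

The real study works with national registry data at a vastly larger scale; the point here is only that “life as a token sequence” drops straight into standard sequence-modeling tooling.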

Another big claim was made by CMU scientists who created a system called Coscientist, an LLM-based assistant for researchers that can do a lot of lab drudgery autonomously. It’s limited to certain domains of chemistry currently, but just like scientists, models like these will be specialists.

Lead researcher Gabe Gomes told Nature: “The moment I saw a non-organic intelligence be able to autonomously plan, design and execute a chemical reaction that was invented by humans, that was amazing. It was a ‘holy crap’ moment.” Basically, it uses an LLM like GPT-4, fine-tuned on chemistry documents, to identify common reactions, reagents and procedures and perform them. So you don’t need to tell a lab tech to synthesize four batches of some catalyst — the AI can do it, and you don’t even need to hold its hand.
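To make the pattern concrete, here’s a heavily simplified sketch of that plan-then-execute loop. Both functions are hypothetical stand-ins; the real system pairs a GPT-4-class model with documentation search and actual lab robotics.

```python
# A heavily simplified sketch of the plan-then-execute pattern. Both
# functions are hypothetical stand-ins: the real system couples a
# GPT-4-class model with documentation search and actual lab hardware.
def llm(prompt: str) -> str:
    """Stand-in for a chemistry-tuned language model call."""
    return "1. weigh reagent A\n2. dissolve in solvent B\n3. heat to 60 C"

def run_on_hardware(step: str) -> None:
    """Stand-in for driving a liquid handler or heater block."""
    print(f"[robot] executing: {step}")

def synthesize(target: str) -> None:
    plan = llm(f"Plan a safe synthesis procedure for {target}.")
    for step in plan.splitlines():
        run_on_hardware(step.strip())

synthesize("a common palladium catalyst")
```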

Google’s AI researchers have had a big week as well, diving into a few interesting frontier domains. FunSearch may sound like Google for kids, but it is actually short for function search, and, like Coscientist, it can make and help make mathematical discoveries. Interestingly, to prevent hallucinations, it (like others recently) uses a matched pair of AI models, a lot like the “old” GAN architecture: one theorizes, the other evaluates.

While FunSearch isn’t going to make any groundbreaking new discoveries, it can take what’s out there and hone or reapply it in new places, so a function that one domain uses but another is unaware of might be used to improve an industry-standard algorithm.
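Here’s a toy version of that theorize-and-evaluate loop, assuming the evaluator can score candidates objectively. The propose() stub stands in for the generating model; none of this is DeepMind’s actual code.

```python
# A toy theorize-and-evaluate loop. propose() stands in for the generating
# model; evaluate() runs the candidate and scores it objectively, so a
# hallucinated (broken or worse) program simply never gets kept.
import random
import re

def propose(best_code: str) -> str:
    """Stand-in for the 'theorizing' model: mutate the numeric constant."""
    return re.sub(r"\d+", str(random.randint(1, 99)), best_code)

def evaluate(code: str) -> float:
    """The 'evaluating' side: execute and measure, no trust required."""
    scope: dict = {}
    exec(code, scope)      # safe here only because we author the candidates
    return scope["f"](10)  # bigger output = better candidate, in this toy

best = "def f(x):\n    return x * 1\n"
best_score = evaluate(best)
for _ in range(20):
    candidate = propose(best)
    score = evaluate(candidate)
    if score > best_score:   # only verifiable improvements survive
        best, best_score = candidate, score
print(best_score)            # climbs toward 990 as mutations get lucky
```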

StyleDrop is a handy tool for people looking to replicate certain styles via generative imagery. The trouble (as the researchers see it) is that if you have a style in mind (say “pastels”) and describe it, the model will have too many sub-styles of “pastels” to pull from, so the results will be unpredictable. StyleDrop lets you provide an example of the style you’re thinking of, and the model will base its work on that — it’s basically super-efficient fine-tuning.


The blog post and paper show that it’s pretty robust, applying a style from any image, whether it’s a photo, painting, cityscape or cat portrait, to any other type of image, even the alphabet (notoriously hard for some reason).

Google is also moving along in the generative video game arena with VideoPoet, which uses an LLM base (like everything else these days… what else are you going to use?) to do a bunch of video tasks, turning text or images to video, extending or stylizing existing video, and so on. The challenge here, as every project makes clear, is not simply making a series of images that relate to one another, but making them coherent over longer periods (like more than a second) and with large movements and changes.


VideoPoet moves the ball forward, it seems, though as you can see, the results are still pretty weird. But that’s how these things progress: First they’re inadequate, then they’re weird, then they’re uncanny. Presumably they leave uncanny at some point but no one has really gotten there yet.

On the practical side of things, Swiss researchers have been applying AI models to snow measurement. Normally one would rely on weather stations, but these can be few and far between, and we have all this lovely satellite data, right? Right. So the ETHZ team took public satellite imagery from the Sentinel-2 constellation, but as lead Konrad Schindler puts it, “Just looking at the white bits on the satellite images doesn’t immediately tell us how deep the snow is.”

So they put in terrain data for the whole country from their Federal Office of Topography (like our USGS) and trained the system to estimate snow depth based not just on the white bits in the imagery but also on ground truth data and tendencies like melt patterns. The resulting tech is being commercialized by ExoLabs, which I’m about to contact to learn more.
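For a rough sense of the pipeline’s shape, here’s a hedged sketch with synthetic arrays standing in for the Sentinel-2 bands, the terrain model and the ground-truth snow depths; none of this is the ETHZ team’s code.

```python
# Synthetic stand-ins throughout: random arrays play the role of Sentinel-2
# reflectance bands, the national elevation model and measured snow depths.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
reflectance = rng.uniform(0, 1, (n, 4))     # "white bits" and other bands
elevation = rng.uniform(200, 4000, (n, 1))  # metres, from terrain data
slope = rng.uniform(0, 60, (n, 1))          # degrees

X = np.hstack([reflectance, elevation, slope])
# Toy ground truth: brighter, higher, flatter pixels hold deeper snow.
y = (2 * reflectance[:, 0] + elevation[:, 0] / 1000
     - slope[:, 0] / 30 + rng.normal(0, 0.2, n))

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X[:4000], y[:4000])
print("R^2 on held-out pixels:", model.score(X[4000:], y[4000:]))
```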

A word of caution from Stanford, though — as powerful as applications like the above are, note that none of them involve much in the way of human bias. When it comes to health, that suddenly becomes a big problem, and health is where a ton of AI tools are being tested out. Stanford researchers showed that AI models propagate “old medical racial tropes.” GPT-4 doesn’t know whether something is true or not, so it can and does parrot old, disproved claims about groups, such as that Black people have lower lung capacity. Nope! Stay on your toes if you’re working with any kind of AI model in health and medicine.


Mapping the AI Ethics Landscape and Authorities in the UK: An Introduction to the Key Issues and Considerations in AI Ethics, from Bias and Fairness to Transparency and Accountability

Introduction

AI Ethics involves examining the moral and ethical implications of AI systems, ensuring that they are developed, implemented and used in a responsible and fair manner. This article aims to provide an introduction to the key issues and considerations in AI ethics, ranging from bias and fairness to transparency and accountability.

In the fast-paced world of technology, artificial intelligence (AI) has emerged as a powerful force with the potential to revolutionise various industries. However, with great power comes great responsibility, and that is where AI ethics steps in.

The Landscape and Authorities

The scope and importance of AI ethics cannot be overstated. As AI systems become more prevalent and influential in our lives, it is crucial to address the ethical implications they pose. The Office for Artificial Intelligence, a division of the UK government, has played a vital role in shaping AI policy, recognising the significance of this field in ensuring the responsible development and use of AI technologies.

To understand the current landscape of AI ethics, it is essential to delve into its historical context and evolution. From the early days of simple algorithms to the complex neural networks of today, AI has come a long way. Milestones such as the development of ethical guidelines and the establishment of institutions dedicated to AI ethics have significantly influenced our understanding of the ethical implications of AI.

One of the most pressing issues in AI ethics is bias and fairness. AI systems are built on data, and if that data contains biases, it can lead to discriminatory outcomes. Defining bias in AI and examining its impacts on various societal sectors is crucial to ensure fair and unbiased decision-making. The Ada Lovelace Institute has been at the forefront of advancing strategies for fairness in AI, providing valuable insights into this important aspect.
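To make “defining bias” slightly less abstract, one common starting point in fairness audits is simply comparing outcome rates across groups. A toy sketch with made-up numbers:

```python
# A toy check for one fairness criterion, demographic parity: compare
# positive-outcome rates across groups. The decisions and group labels
# below are made up; real audits use real model outputs.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = approved
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = decisions[group == "a"].mean()  # 3/5 approved
rate_b = decisions[group == "b"].mean()  # 2/5 approved
gap = abs(rate_a - rate_b)               # a large gap flags possible bias
print(f"group a: {rate_a:.2f}, group b: {rate_b:.2f}, gap: {gap:.2f}")
```

Demographic parity is only one of several competing fairness definitions, which is part of why the policy work described here is hard.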

Transparency is another key consideration in AI ethics. It is vital to understand how AI systems make decisions to build trust and ensure accountability. However, achieving transparency in AI systems poses several challenges. The Alan Turing Institute, a renowned research institution, has been actively exploring these challenges and working towards creating more transparent AI systems.

In the realm of AI ethics, accountability and responsibility play a vital role. Determining who should be held accountable for AI decisions and understanding the legal and ethical responsibilities involved is crucial. This ensures that AI is used in a manner that prioritises the well-being and rights of individuals and society as a whole.

As AI systems rely on vast amounts of data, privacy concerns are of utmost importance. Striking the right balance between AI advancement and user privacy is a delicate task. The Information Commissioner’s Office (ICO) provides guidelines on data protection and privacy in AI, offering valuable insights into protecting personal information while leveraging AI technologies.

While ethical considerations are essential, AI also presents numerous opportunities for social good. AI applications can be harnessed to address societal challenges and deliver positive impacts. However, it is crucial to ensure that these applications are designed and implemented ethically, taking into account potential unintended consequences. Ethical considerations should guide the development and implementation of AI for social good.

The field of AI ethics is not limited to national boundaries; it requires a global perspective. Different cultural and regional approaches to AI ethics exist, and understanding these perspectives is crucial for fostering international collaboration. The Nuffield Council on Bioethics has been a proponent of international collaboration, emphasizing the need for diverse voices and perspectives in shaping AI ethics.

Looking to the future, emerging trends and challenges in AI ethics need to be anticipated. As AI technologies continue to evolve, new ethical dilemmas will arise. Education and policy have a crucial role to play in shaping the future of ethical AI, ensuring that developers, users, and policymakers are equipped with the necessary knowledge and tools to navigate the ethical landscape.

Summary

In summary, mapping the AI ethics landscape is vital for ensuring the responsible and ethical development and use of AI technologies. This introduction has explored key issues such as bias and fairness, transparency, accountability, privacy concerns, opportunities for social good, global perspectives, and future trends.

It is essential to integrate ethics into AI development to safeguard against unintended consequences and to create AI systems that benefit society as a whole. As technology continues to advance, the ongoing journey of ethical AI reminds us of the importance of continuously evaluating and addressing the ethical implications of our rapidly evolving technological landscape.


Can Artificial Intelligence Replace Humans? An Engineering Perspective

In a world dominated by economic value and increased automation, there is a growing worry about whether Artificial Intelligence will replace humans. Yet, many believe that instead of taking jobs away, AI is transforming how we work, unlocking human potential and changing how we innovate and boost productivity. In engineering, it’s crucial to identify roles susceptible to AI and automation, as well as those resilient to change. In this piece, we’ll navigate the dynamic landscape and dive into the impact of artificial intelligence on engineering and related disciplines.

Robots Are On The Rise

It was expected that robots would replace low-skilled labour, particularly in monotonous and dangerous tasks on factory floors. In reality, human labour remains more cost-effective than investing in purchasing and programming robots for most facilities. In addition to the robotics hardware, the cost of training is substantial: every time you make a change to the process, traditional robots must be retrained.

Only in large-scale production, such as smartphone assembly, has robotics become practical due to the high volume. A big breakthrough is on the horizon, though: The latest robotics systems with computer vision and artificial intelligence can train themselves and follow generic commands in natural language. When you can “ask” a robot to separate red “things” and green “things” in plain English, robotics automation has tremendous potential.
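The perception half of that command is the easy part to picture. Here is a minimal, invented sketch that labels detected objects by colour before a planner would take over; no real robot or camera is involved.

```python
# A toy version of the perception step: label each detected object 'red'
# or 'green' by mean channel intensity. The 4x4 patches are invented
# stand-ins for crops from a real camera feed.
import numpy as np

def dominant_colour(rgb_patch: np.ndarray) -> str:
    r, g, _b = rgb_patch.reshape(-1, 3).mean(axis=0)
    return "red" if r > g else "green"

red_obj = np.zeros((4, 4, 3)); red_obj[..., 0] = 200      # mostly red
green_obj = np.zeros((4, 4, 3)); green_obj[..., 1] = 200  # mostly green

for obj in (red_obj, green_obj):
    print(f"route object to the {dominant_colour(obj)} bin")
```

The hard part the paragraph alludes to is everything after this: grasping, motion planning and recovering when the scene changes.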

Algorithmic Copywriting

Copywriting became popular because people realized that persuasive content makes a big impact in grabbing the audience’s attention. Whether it’s on websites, press releases or various media platforms, effective text plays a vital role in conveying official information and engaging with potential customers.

Presently, the work of copywriters may be greatly facilitated by artificial intelligence. While the profession won’t disappear entirely, AI may empower engineers with specialized knowledge to write compelling articles without hiring other people. AI cannot completely replace copywriters, as refined taste and the quality of the text are crucial factors that AI may struggle to replicate. Nonetheless, the value of specialized knowledge in specific domains is also starting to emerge.

Designers

While graphic designers are all-in on adopting AI tech, the realm of industrial design clings to manual processes. That doesn’t imply, however, that industrial designers are barred from, or should refrain from, harnessing the power of AI. For instance, they can extract valuable insights by using AI to generate multiple product concepts faster. Alternatively, they can task AI with generating a substantially broader range of product sketches, enhancing the exploration of design possibilities.

At present, AI-generated product renders often fall short of perfection or don’t account for manufacturing limitations. Nevertheless, continuous refinement through iterative prompts is feasible. This process might result in renders and sketches at a reasonable pace, potentially faster than starting from scratch, yet it may not revolutionize the field. While generative AI can assist less skilled designers and expedite the design process, high-end professionals rely on their processes and creativity.

10x Engineers

The hot conversation in Silicon Valley revolves around whether AI can replace large software engineering staffs. Big tech companies hire tens of thousands of engineers to write generic and not always groundbreaking software. Should programmers be worried about their jobs? It depends.

Website designers are at risk: AI tools can create great-looking web pages with simple prompts. Further customization, such as changing fonts or adding buttons, is even easier than asking your programmer friend.

More advanced systems are developed by hundreds of programmers. Following the 80/20 rule, even before AI, a few key members created most of the value. Who is a “10x engineer”? A person who can write ten times more lines of code than an average programmer. With AI, 10x engineers can drive even more value. Shall we say 100x? This way, a team of 10 programmers working with an AI copilot, a system that engineers can “ask” to write a piece of code, will do more than their entire organization did before.
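That workflow is easy to sketch: state the intent in English, let the model draft the code and gate the result behind a human-written test, as below. The ask_copilot() function is a hypothetical stand-in for any real code-generation API.

```python
# A sketch of the ask-the-copilot loop. ask_copilot() is a hypothetical
# stand-in for any code-generation API; the human still writes the
# acceptance test, which is where the leverage (and the safety) comes from.
def ask_copilot(request: str) -> str:
    """Stand-in for a code-generating LLM call."""
    return "def slugify(s):\n    return '-'.join(s.lower().split())"

def accept(code: str) -> bool:
    scope: dict = {}
    exec(code, scope)  # run the generated definition
    return scope["slugify"]("Hello World") == "hello-world"

code = ask_copilot("Write slugify(s): lowercase, spaces become dashes.")
print("merge it" if accept(code) else "send it back")
```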

Electronics Engineers

Electrical engineers develop physical products. Every headphone or camera developed in the U.S. was designed by a team of engineers, each earning $150,000 or more per year. This drives the total development cost into the millions of dollars. Can AI design a new product for a startup? The short answer is no.

However, it can augment and expedite development. For instance, AI could generate a diagram outlining the major components of a specific device. Engineers spend a lot of time manually selecting components and discussing them with manufacturers. An AI co-pilot may compile a list of the top 10 manufacturers, find their contact info and draft email requests for pricing and documentation. AI can also help perform calculations and solve math problems. After component selection, engineers connect all parts on the Printed Circuit Board (PCB). While it remains a mainly manual process, CAD software is incorporating more and more AI tools to expedite the development.

Mechanical Engineers

The development process unfolds as our designer crafts the initial design, and our electronics team translates it into a functional PCB. At a certain point, the design is handed over to Mechanical Engineers, who transform it into a manufacturable enclosure. Integration of electronics and mechanics is a meticulous, hands-on affair. Currently, no AI software exists to seamlessly handle the complexities of developing mechanical devices. Even sophisticated tools fall short, and the integration process remains a manual craft. The limited AI involvement may extend to highlighting potential conflicts or identifying areas where design aesthetics could be improved, but the bulk of the work is done by skilled hands.

No One Likes To Dwell On Limitations

Our world is and should be evolving, but technology is far from being able to fully replace highly skilled engineers. Key qualities such as creativity, effective communication and the ability to devise innovative solutions remain invaluable. While AI can complement and enhance certain aspects of work, it is unlikely to completely overshadow the expertise and capabilities of skilled specialists. The reliance on the human touch remains irreplaceable.



Montreal research hub spearheads global AI ethics debate

Rapid developments in Artificial Intelligence and recent turmoil at industry powerhouse OpenAI have brought fresh attention to a key hub of ethics research related to the technology in Montreal, led by Canadian “godfather of AI” Yoshua Bengio.

Bengio — who in 2018 shared with Geoffrey Hinton and Yann LeCun the Turing Award for their work on deep learning — says he is worried about the technology leapfrogging human intelligence and capabilities in the not-too-distant future.

Speaking to AFP at his home in Montreal, the professor warned that AI developments are moving at breakneck speed and risked “creating a new species capable of making decisions that harm or even endanger humans.” 

OpenAI’s recent dismissal, and rehiring a few days later, of chief executive Sam Altman, who has been accused of downplaying risks in his push to advance its ChatGPT bot, illustrates some of the turmoil in the startup sector and the fierce competition in the race to commercialize generative AI.

For some time, Bengio has been warning about companies moving too fast without guardrails, “potentially at the public’s expense.”

It is essential, he said, to have “rules that’ll be followed by all companies.”

At a world-first AI summit in Britain in early November, Bengio was tasked with leading a team producing an inaugural report on AI safety.

The aim is to set priorities to inform future work on the security of the cutting-edge technology.

Society and AI

The renowned AI academic has brought together a “critical mass of AI researchers” (1,000+) through his Mila research institute, located in a former working-class neighbourhood of Montreal. His neighbors include AI research facilities of American tech giants Microsoft, Meta, IBM and Google.

“This concentration of experts in artificial intelligence, which is greater than anywhere else in the world,” is what attracted Google, says Hugo Larochelle, the hoodie-wearing scientific director of the Silicon Valley giant’s AI subsidiary DeepMind.

Early on, these researchers began thinking about the future of AI, and consultations with the public and researchers from all disciplines led in 2018 to a global AI charter called the Montreal Declaration for a Responsible Development of Artificial Intelligence.

“We knew early on that the scientific community needed to think about the integration of AI into society,” explained Guillaume Macaux, vice-president of OBVIA, an international observatory studying the social impacts of AI.

Its 220 researchers advise on government policy and raise public awareness of the possible positive effects and negative impacts of this cutting-edge technology.

‘Demystifying AI’

Artists too are playing a part in “demystifying AI,” says Sandra Rodriguez, who splits her time making art in Montreal and teaching digital media at the prestigious Massachusetts Institute of Technology (MIT) in the United States.

She showed off to AFP her latest art installation.

Entering a breathtaking futuristic world via a virtual reality (VR) headset, the public is able to converse with a bot inspired by American linguist Noam Chomsky.

Voice and text answers to queries appear simultaneously. With the touch of a finger, lists of the alternative responses the AI considered, along with their associated percentages, pop up.

“You quickly realize that it’s actually just an algorithm,” said Rodriguez.

“Montreal is a fantastic playground” for exploring the potential and limits of artificial intelligence, as well as “debating (related) ethical and societal issues,” she told AFP.

According to Rodriguez, art becomes “necessary more than ever” to invite “a wider public to ask questions about the issues of artificial intelligence which will affect them tomorrow.”
