Digital Ethics Summit 2023: The year in AI

Digital Ethics Summit 2023: well-intentioned ethical principles and frameworks for AI must now be translated into concrete, practical measures.

Well-intentioned ethical principles and frameworks for artificial intelligence systems must now be translated into concrete, practical measures, but those measures cannot be dictated to the rest of the world by the rich countries developing the technology.

Speaking at TechUK’s seventh annual Digital Ethics Summit in December, panelists reflected on developments in AI Governance since the release of generative models such as ChatGPT at the end of 2022, which put the technology in front of millions of users for the first time.

Major developments since then include the UK government’s AI whitepaper; the UK’s hosting of the world’s first AI Safety Summit at the start of November 2023; the European Union’s (EU) progress on its AI Act; the White House’s Executive Order on safe, secure and trustworthy AI; and China’s passing of multiple pieces of AI-related legislation throughout the year.  

For many in attendance, all of the talk around ethical and responsible AI now needs to be translated into concrete policies, but there is concern that the discussion around how to control AI is overly dominated by rich countries from the global north.  

A consensus emerged that while the growing intensity of the international debate around AI is a sign of positive progress, there must also be a greater emphasis placed on AI as a socio-technical system, which means reckoning with the political economy of the technology and dealing with the practical effects of its operation in real-world settings.

Reflecting on the past year of AI developments, Andrew Strait, an associate director at the Ada Lovelace Institute, said he was shocked at the time of ChatGPT’s release, not because the tool was not impressive or exciting, but because it showed how weak industry responsibility practices are.

Citing ChatGPT’s release in spite of concerns raised internally by OpenAI staff and the subsequent arms race it prompted between Google and Microsoft over generative AI, he added that “these were run as experiments on the public, and many of the issues these systems had at their release are still not addressed”, including how models are assessed for risk.

Moment of Contradiction

Strait continued: “I find it a very strange moment of contradiction. We have had our prime minister in the UK say, ‘This is one of the most dangerous technologies, we have to do something, but it’s too soon to regulate.’ That means we won’t have regulation in this country until 2025 at the earliest.” He added that many of the issues we’re now faced with in terms of AI were entirely foreseeable.

Highlighting the “extreme expansion” of non-consensual sexual imagery online as a result of Generative AI (GenAI), Strait said situations like this were being predicted as far back as 2017 when he worked at DeepMind: “I’m shocked at the state of our governance,” he said.

Commenting on the consensus that emerged during the UK AI Safety Summit around the need for further research and the creation of an international testing and evaluation regime – with many positing the Intergovernmental Panel on Climate Change (IPCC) as the example to follow – Strait added that the IPCC was essentially founded as a way of turning a regulatory problem into a research problem. “They’ve now spent 30 more years researching and are yet to address [the problem],” he said.

Alex Chambers, platform lead at venture capital firm Air Street Capital, also warned against allowing big tech firms to dominate regulatory discussions, particularly when it comes to the debate around open versus closed source.

“There is a war on open source at the moment, and that war is being led by big technology companies that … are trying to snuff out the open source ecosystem,” he said, adding that “it’s not entirely surprising” given many of them have identified open source as one of their biggest competitive risks.

“That’s why I’m nervous about the regulatory conversation, because … whenever I hear people talk about the need to engage more stakeholders with more stakeholder consultation, what I tend to hear is, ‘Let’s get a bunch of incumbents in the room and ask them if they want change,’ because that’s what it usually turns into when regulators do that in this country,” said Chambers.

He added that he’s concerned the regulatory conversation around AI “has been hijacked by a small number of very well-funded, motivated bodies”, and that there should be much more focus in 2024 on breaking the dependency most smaller firms have on the technical infrastructure of large companies dominating AI’s development and deployment.

The UK Approach

Pointing to advances in GenAI over the past year, Lizzie Greenhalgh, head of AI regulation at the Department for Science, Innovation and Technology (DSIT), said it "has proved the value and the strength" of the adaptable, context-based regulatory approach set out by the UK government in its March 2023 AI whitepaper.

As part of this proposed “pro-innovation” framework, the government said it would empower existing regulators – including the Information Commissioner’s Office (ICO), the Health and Safety Executive, Equality and Human Rights Commission (EHRC) and Competition and Markets Authority – to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise.

“We were always clear the whitepaper was an early step and this is a fast-evolving tech,” said Greenhalgh. “We have been able to move with the technology, but we know it’s not a done deal just yet.”

For Tim Clement-Jones, however, “The stance of the current government is that regulation is the enemy of innovation,” and attempts to minimise the amount of regulation in this space are “out of step” with other jurisdictions.

Like Strait, Clement-Jones said he feels “we’ve gone backwards in many ways”, citing the speed with which the US and EU have moved on AI in comparison with the UK, as well as the lack of regulation across large swathes of the UK economy.

“The regulators don’t cover every sector, so you have to have some horizontality,” he said.

Commenting on the Safety Summit, Clement-Jones questioned the wisdom of focusing on “frontier” risks such as the existential threat of AI when other dangers such as bias and misinformation are “in front of our very eyes … we don’t have to wait for the apocalypse, quite frankly, before we regulate”.

Convening Role

According to Hetan Shah, chief executive at the British Academy, while he shares similar criticisms about the focus on speculative AI risks, the Summit was a success “in its own terms” of bringing Western powers and China around the same table, and it’s important the UK plays some kind of convening role in AI regulation given most firms developing it are either from China or the US.

However, he added that while the flexible structure of the regulatory approach set out by the UK government in its whitepaper is helpful in adapting to new technological developments, the UK’s problem has always been in operationalising its policies.

He added the lack of discussion about the whitepaper at the AI Safety Summit gave him the impression the government “had almost lost confidence in its own approach”, and that now is the time for movement with a sense of urgency. “We’ve got to start getting into specifics, which are both horizontal and vertical,” he said. “The problems of misinformation are different to the problems of privacy, which are different to the problems of bias, which are different to the problems of what would an algorithm do for financial trading, which is different to what impact will it have on jobs in the transport system, et cetera, so we need a lot of that drilling down … there’s no straightforward way of doing it.”

Citing Pew research that shows US citizens are now more concerned than excited about AI for the first time, UK information commissioner John Edwards said during his keynote address that “if people don’t trust AI, they’re less likely to use it, resulting in reduced benefits and less growth or innovation in society as a whole”.

He added that the UK’s existing regulatory framework allows for “firm and robust regulatory interventions, as well as innovation”, and that “there is no regulatory lacuna here – the same rules apply [in data protection], as they always had done”.

Highlighting the Ada Lovelace Institute’s analysis of the UK’s regulation – which showed “large swathes” of the UK economy are either unregulated or only partially regulated – Strait challenged the view there was no regulatory lacuna.

“Existing sector-specific regulations have enormous gaps,” he said. “It does not provide protections, and even in the case of the EHRC and ICO, which do have the powers to do investigations, they historically haven’t used them. That’s for a few reasons. One is capacity, another is resources, and another is, frankly, a government that’s very opposed to the enforcement of regulation. If we don’t change those three things, we’ll continue to operate in an environment where we have unsafe products that are unregulated on the market. And that’s not helping anybody.”

Translating Words to Action

Gabriela Ramos, assistant director-general for social and human sciences at Unesco, said that although there is consensus emerging among the “big players” globally around the need to regulate for AI-related risks, all of the talk around ethical and responsible AI now needs to be translated into concrete policies.

She added that any legal institutional frameworks must account for the different rates of AI development across countries globally.

Camille Ford, a researcher in global governance at the Centre for European Policy Studies (CEPS), shared similar sentiments on the emerging consensus for AI regulation, noting that the significant number of ethical AI frameworks published by industry, academia, government and others in recent years all tend to emphasise the need for transparency, reliability and trustworthiness, justice and equality, privacy, and accountability and liability.

Commenting on the UK’s AI Safety Summit, Ford added that there needs to be further international debate on the concept of AI safety, as the conception of safety put forward at the summit primarily focused on existential or catastrophic AI risks through a limited technological lens.

“This seems to be the space that the UK is carving out for itself globally, and there’s obviously going to be more safety summits in the future [in South Korea and France],” she said. “So, this is a conception of safety that is going to stick and that is getting institutionalised at different levels.”

For Ford, future conversations around AI need to look at the risk of actually existing AI systems as they are now, “rather than overly focusing on designing technical mitigations for risks that don’t exist yet”.

Political Economy of AI

Highlighting Seth Lazar and Alondra Nelson’s paper, AI safety on whose terms?, Ford added that AI safety needs to be thought of more broadly in socio-technical terms, which means taking account of “the political economy of AI”.

Noting this would include its impact on the environment, the working conditions of data labelers that run and maintain the systems, as well as how data is collected and processed, Ford added that such an approach “necessitates understanding people and societies, and not just the technology”.

Pointing to the growing intensity of the international debate around AI as a sign of positive progress, Zeynep Engin, chair and director of the Data for Policy governance forum, added that while there is a lot of talk about how to make AI more responsible and ethical, these are “fuzzy terms” to put into practice.

She added that while there are discussions taking place across the world for how to deal with AI, the direction of travel is dominated by rich countries from the global north where the technology is primarily being developed.

“We can already see that the dominance is from the global north, and it becomes quite self-selective,” said Engin. “When you see that when things are left to grow organically [because of where the tech firms are], then do you leave a big chunk of the global population behind?” She added that it’s difficult to create a regulatory environment that promotes social good and reflects the dynamic nature of the technology when so many are excluded from the conversation.

“I’m not just saying this from an equality perspective … but I also think if you’re saying AI regulation is as big a problem as climate change, and if you’re going to use this technology for social good, for public good in general, then you really need to have this conversation in a much more balanced way,” said Engin.

Noting that international AI ethics and governance forums are heavily concentrated in “wealthy, like-minded nations”, Ford said there was a risk of this approach being “overly institutionalised” at the expense of other actors who have not yet been as much a part of the conversation.

For Engin, a missing piece of the puzzle in the flurry of AI governance activity over the past year is cross-border community-led initiatives, which can help put more practical issues on the table.

Mapping the AI Ethics Landscape in the United Kingdom

AI ethics examines the moral and ethical implications of AI systems and how they can be developed and used responsibly.

Mapping the AI Ethics Landscape and Authorities in the UK: An Introduction to the Key Issues and Considerations in AI Ethics, from Bias and Fairness to Transparency and Accountability

Introduction

AI Ethics involves examining the moral and ethical implications of AI systems, ensuring that they are developed, implemented and used in a responsible and fair manner. This article aims to provide an introduction to the key issues and considerations in AI ethics, ranging from bias and fairness to transparency and accountability.

In the fast-paced world of technology, artificial intelligence (AI) has emerged as a powerful force with the potential to revolutionise various industries. However, with great power comes great responsibility, and that is where AI ethics steps in.

The Landscape and Authorities

The scope and importance of AI ethics cannot be overstated. As AI systems become more prevalent and influential in our lives, it is crucial to address the ethical implications they pose. The Office for Artificial Intelligence, a division of the UK government, has played a vital role in shaping AI policy, recognising the significance of this field in ensuring the responsible development and use of AI technologies.

To understand the current landscape of AI ethics, it is essential to delve into its historical context and evolution. From the early days of simple algorithms to the complex neural networks of today, AI has come a long way. Milestones such as the development of ethical guidelines and the establishment of institutions dedicated to AI ethics have significantly influenced our understanding of the ethical implications of AI.

One of the most pressing issues in AI ethics is bias and fairness. AI systems are built on data, and if that data contains biases, it can lead to discriminatory outcomes. Defining bias in AI and examining its impacts on various societal sectors is crucial to ensure fair and unbiased decision-making. The Ada Lovelace Institute has been at the forefront of advancing strategies for fairness in AI, providing valuable insights into this important aspect.
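To make "bias" concrete, here is a minimal sketch of one common fairness check, the demographic parity gap between groups in a model's decisions. The data and group labels are entirely hypothetical, and this is only one of many possible fairness metrics, not a prescribed method from the institutions mentioned above.

```python
# Minimal sketch: checking demographic parity on hypothetical loan decisions.
# All data and group labels here are illustrative, not from any real system.
from collections import defaultdict

decisions = [
    # (protected_group, model_approved)
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

approvals = defaultdict(list)
for group, approved in decisions:
    approvals[group].append(approved)

# Selection rate per group: fraction of positive (approved) outcomes.
rates = {group: sum(vals) / len(vals) for group, vals in approvals.items()}

# Demographic parity difference: gap between highest and lowest selection rate.
parity_gap = max(rates.values()) - min(rates.values())

print(rates)       # e.g. {'group_a': 0.75, 'group_b': 0.25}
print(parity_gap)  # a large gap is one concrete signal of potentially unfair outcomes
```

A check like this does not prove discrimination on its own, but it illustrates how data-driven decisions can be audited for group-level disparities.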

Transparency is another key consideration in AI ethics. It is vital to understand how AI systems make decisions to build trust and ensure accountability. However, achieving transparency in AI systems poses several challenges. The Alan Turing Institute, a renowned research institution, has been actively exploring these challenges and working towards creating more transparent AI systems.

In the realm of AI ethics, accountability and responsibility play a vital role. Determining who should be held accountable for AI decisions and understanding the legal and ethical responsibilities involved is crucial. This ensures that AI is used in a manner that prioritises the well-being and rights of individuals and society as a whole.

As AI systems rely on vast amounts of data, privacy concerns are of utmost importance. Striking the right balance between AI advancement and user privacy is a delicate task. The Information Commissioner’s Office (ICO) provides guidelines on data protection and privacy in AI, offering valuable insights into protecting personal information while leveraging AI technologies.

While ethical considerations are essential, AI also presents numerous opportunities for social good. AI applications can be harnessed to address societal challenges and deliver positive impacts. However, it is crucial to ensure that these applications are designed and implemented ethically, taking into account potential unintended consequences. Ethical considerations should guide the development and implementation of AI for social good.

The field of AI ethics is not limited to national boundaries; it requires a global perspective. Different cultural and regional approaches to AI ethics exist, and understanding these perspectives is crucial for fostering international collaboration. The Nuffield Council on Bioethics has been a proponent of international collaboration, emphasizing the need for diverse voices and perspectives in shaping AI ethics.

Looking to the future, emerging trends and challenges in AI ethics need to be anticipated. As AI technologies continue to evolve, new ethical dilemmas will arise. Education and policy have a crucial role to play in shaping the future of ethical AI, ensuring that developers, users, and policymakers are equipped with the necessary knowledge and tools to navigate the ethical landscape.

Summary

In summary, mapping the AI ethics landscape is vital for ensuring the responsible and ethical development and use of AI technologies. This introduction has explored key issues such as bias and fairness, transparency, accountability, privacy concerns, opportunities for social good, global perspectives, and future trends.

It is essential to integrate ethics into AI development to safeguard against unintended consequences and to create AI systems that benefit society as a whole. As technology continues to advance, the ongoing journey of ethical AI reminds us of the importance of continuously evaluating and addressing the ethical implications of our rapidly evolving technological landscape.

Can Artificial Intelligence Replace Humans?

In a world dominated by economic value and increased automation, there is a growing worry about whether AI will replace humans.

Can Artificial Intelligence Replace Humans? An Engineering Perspective

In a world dominated by economic value and increased automation, there is a growing worry about whether Artificial Intelligence will replace humans. Yet, many believe that instead of taking jobs away, AI is transforming how we work, unlocking human potential and changing how we innovate and boost productivity. In engineering, it’s crucial to identify roles susceptible to AI and automation, as well as those resilient to change. In this piece, we’ll navigate the dynamic landscape and dive into the impact of artificial intelligence on engineering and related disciplines.

Robots Are on the Rise

It was expected that robots would replace low-skilled labour, particularly in monotonous and dangerous tasks on factory floors. In reality, human labour remains more cost-effective than purchasing and programming robots for most facilities. In addition to the robotics hardware, the cost of training is substantial: every time the process changes, traditional robots must be re-trained.

Only in large-scale production, such as smartphone assembly, has robotics become practical due to the high volume. A big breakthrough is on the horizon, though: The latest robotics systems with computer vision and artificial intelligence can train themselves and follow generic commands in natural language. When you can “ask” a robot to separate red “things” and green “things” in plain English, robotics automation has tremendous potential.

Algorithmic Copywriting

Copywriting became popular because people realized that persuasive content makes a big impact in grabbing the audience’s attention. Whether it’s on websites, press releases or various media platforms, effective text plays a vital role in conveying official information and engaging with potential customers.

Presently, the work of copywriters may be greatly facilitated by artificial intelligence. While the profession won't disappear entirely, AI may empower engineers with specialised knowledge to write compelling articles without hiring other people. AI cannot completely replace copywriters, as refined taste and the quality of the text are crucial factors it may struggle to replicate. Nonetheless, deep knowledge of particular domains is becoming an increasingly significant differentiator.

Designers

While graphic designers are all-in on adopting AI tech, the realm of industrial design still clings to manual processes. That doesn't mean industrial designers are barred from, or should refrain from, the power of AI. For instance, they can extract valuable insights by using AI to generate multiple product concepts faster. Alternatively, they can task AI with generating a substantially broader range of product sketches, enhancing the exploration of design possibilities.

At present, AI-generated product renders often fall short of perfection or don’t account for manufacturing limitations. Nevertheless, continuous refinement through iterative prompts is feasible. This process might result in renders and sketches at a reasonable pace, potentially faster than starting from scratch, yet it may not revolutionize the field. While generative AI can assist less skilled designers and expedite the design process, high-end professionals rely on their processes and creativity.

10x Engineers

The hot conversation in Silicon Valley revolves around whether AI can replace large software engineering staffs. Big tech companies hire tens of thousands of engineers to write generic and not always groundbreaking software. Should programmers be worried about their jobs? It depends.

Website designers are at risk: AI tools can create great-looking web pages with simple prompts. Further customization, such as changing fonts or adding buttons, is even easier than asking your programmer friend.

More advanced systems are developed by hundreds of programmers. Following the 80/20 rule, even before AI, a few key members created most of the value. Who is a "10x engineer"? A person who can write ten times more lines of code than an average programmer. With AI, 10x engineers can drive even more value; shall we say 100 times? This way, a team of 10 programmers, together with an AI copilot (a system that engineers can "ask" to write a piece of code), will do more than their entire organization did before.

Electronics Engineers

Electrical engineers develop physical products. Every headphone or camera developed in the U.S. was designed by a team of engineers, each earning $150,000 or more per year. This drives the total development cost into several million dollars. Can AI design a new product for a startup? The short answer is NO.

However, it can augment and expedite development. For instance, AI could generate a diagram outlining the major components of a specific device. Engineers spend a lot of time manually selecting components and discussing them with manufacturers. An AI co-pilot may compile a list of the top 10 manufacturers, find their contact info and draft email requests for pricing and documentation. AI can also help perform calculations and solve math problems. After component selection, engineers connect all parts on the Printed Circuit Board (PCB). While it remains a mainly manual process, CAD software is incorporating more and more AI tools to expedite the development.
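As a rough illustration of the co-pilot workflow described above, the sketch below asks a large language model to draft a pricing and documentation request for a shortlisted component. It assumes the official OpenAI Python client; the model name, part details and target price are placeholders, and any real engineer would review the draft before sending it.

```python
# Hedged sketch: using an LLM to draft a supplier inquiry for a chosen component.
# Assumes the OpenAI Python client; model name and part details are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

part = {"name": "low-power Bluetooth module", "quantity": 5000, "target_price_usd": 3.50}

prompt = (
    "Draft a short, polite email to a component manufacturer requesting "
    f"volume pricing, lead times and datasheets for a {part['name']}, "
    f"for an initial order of {part['quantity']} units at a target price "
    f"of about ${part['target_price_usd']} per unit."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # the engineer reviews and edits before sending
```

The point is not that the model replaces the engineer's judgement about components, but that it removes repetitive drafting work from the loop.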

Mechanical Engineers

The development process unfolds as our designer crafts the initial design, and our electronics team translates it into a functional PCB. At a certain point, the design is handed over to Mechanical Engineers, who transform it into a manufacturable enclosure. Integration of electronics and mechanics is a meticulous, hands-on affair. Currently, no AI software exists to seamlessly handle the complexities of developing mechanical devices. Even sophisticated tools fall short, and the integration process remains a manual craft. The limited AI involvement may extend to highlighting potential conflicts or identifying areas where design aesthetics could be improved, but the bulk of the work is done by skilled hands.

No One Likes To Dwell On Limitations

Our world is and should be evolving, but technology is far from being able to fully replace highly skilled engineers. Key qualities such as creativity, effective communication and the ability to devise innovative solutions remain invaluable. While AI can complement and enhance certain aspects of work, it is unlikely to completely overshadow the expertise and capabilities of skilled specialists. The reliance on the human touch remains irreplaceable.



AI Ethics Keeps Falling By The Wayside

Keeping up with artificial intelligence: a roundup of recent stories in the world of AI, ethics and machine learning.

This week in AI: AI ethics keeps falling by the wayside

Keeping up with an industry as fast moving as Artificial Intelligence is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week in AI, the news cycle finally quieted down a bit ahead of the holiday season. But that's not to suggest there was a dearth of news to write about, which was both a blessing and a curse for this sleep-deprived reporter.

A particular headline from the AP caught my eye this morning: "AI image generators are being trained on explicit photos of children." The gist of the story is that LAION, a dataset used to train many popular open source and commercial AI image generators, including Stable Diffusion and Imagen, contains thousands of images of suspected child sexual abuse. A watchdog group based at Stanford, the Stanford Internet Observatory, worked with anti-abuse charities to identify the illegal material and report the links to law enforcement.

Now, LAION, a non-profit, has taken down its training data and pledged to remove the offending material before republishing it. But the incident serves to underline just how little thought is being put into generative AI products as competitive pressures ramp up.

Thanks to the proliferation of no-code AI model creation tools, it's becoming frightfully easy to train generative AI on any dataset imaginable. That's a boon for startups and tech giants alike looking to get such models out the door. With the lower barrier to entry, however, comes the temptation to cast aside ethics in favour of an accelerated path to market.

Ethics is hard — there’s no denying that. Combing through the thousands of problematic images in LAION, to take this week’s example, won’t happen overnight. And ideally, developing AI ethically involves working with all relevant stakeholders, including organizations that represent groups often marginalized and adversely impacted by AI systems.

The industry is full of examples of AI release decisions made with shareholders, not ethicists, in mind. Take for instance Bing Chat (now Microsoft Copilot), Microsoft's AI-powered chatbot on Bing, which at launch compared a journalist to Hitler and insulted their appearance. As of October, ChatGPT and Bard, Google's ChatGPT competitor, were still giving outdated, racist medical advice. And the latest version of OpenAI's image generator DALL-E shows evidence of Anglocentrism.

Suffice it to say harms are being done in the pursuit of AI superiority — or at least Wall Street’s notion of AI superiority. Perhaps with the passage of the EU’s AI regulations, which threaten fines for noncompliance with certain AI guardrails, there’s some hope on the horizon. But the road ahead is long indeed.

Here are some other AI stories of note from the past few days:

Predictions for AI in 2024: Devin lays out his predictions for AI in 2024, touching on how AI might impact the U.S. primary elections and what’s next for OpenAI, among other topics.

Against pseudanthropy: Devin also wrote suggesting that AI be prohibited from imitating human behaviour.

Microsoft Copilot gets music creation: Copilot, Microsoft’s AI-powered chatbot, can now compose songs thanks to an integration with GenAI music app Suno.

Facial recognition out at Rite Aid: Rite Aid has been banned from using facial recognition tech for five years after the Federal Trade Commission found that the U.S. drugstore giant’s “reckless use of facial surveillance systems” left customers humiliated and put their “sensitive information at risk.”

EU offers compute resources: The EU is expanding its plan, originally announced back in September and kicked off last month, to support homegrown AI startups by providing them with access to processing power for model training on the bloc’s supercomputers.

OpenAI gives board new powers: OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. A new “safety advisory group” will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power.

Q&A with UC Berkeley’s Ken Goldberg: For his regular Actuator newsletter, Brian sat down with Ken Goldberg, a professor at UC Berkeley, a startup founder and an accomplished roboticist, to talk humanoid robots and broader trends in the robotics industry.

CIOs take it slow with GenAI: Ron writes that, while CIOs are under pressure to deliver the kind of experiences people are seeing when they play with ChatGPT online, most are taking a deliberate, cautious approach to adopting the tech for the enterprise.

News publishers sue Google over AI: A class action lawsuit filed by several news publishers accuses Google of “siphon[ing] off” news content through anticompetitive means, partly through AI tech like Google’s Search Generative Experience (SGE) and Bard chatbot.

OpenAI inks deal with Axel Springer: Speaking of publishers, OpenAI inked a deal with Axel Springer, the Berlin-based owner of publications including Business Insider and Politico, to train its generative AI models on the publisher’s content and add recent Axel Springer-published articles to ChatGPT.

Google brings Gemini to more places: Google integrated its Gemini models with more of its products and services, including its Vertex AI managed AI dev platform and AI Studio, the company’s tool for authoring AI-based chatbots and other experiences along those lines.

More machine learnings

Certainly the wildest (and easiest to misinterpret) research of the last week or two has to be life2vec, a Danish study that uses countless data points in a person’s life to predict what a person is like and when they’ll die. Roughly!

The study isn’t claiming oracular accuracy (say that three times fast, by the way) but rather intends to show that if our lives are the sum of our experiences, those paths can be extrapolated somewhat using current machine learning techniques. Between upbringing, education, work, health, hobbies and other metrics, one may reasonably predict not just whether someone is, say, introverted or extroverted, but how these factors may affect life expectancy. We’re not quite at “precrime” levels here but you can bet insurance companies can’t wait to license this work.

Another big claim was made by CMU scientists who created a system called Coscientist, an LLM-based assistant for researchers that can do a lot of lab drudgery autonomously. It’s limited to certain domains of chemistry currently, but just like scientists, models like these will be specialists.

Lead researcher Gabe Gomes told Nature: "The moment I saw a non-organic intelligence be able to autonomously plan, design and execute a chemical reaction that was invented by humans, that was amazing. It was a 'holy crap' moment." Basically it uses an LLM like GPT-4, fine-tuned on chemistry documents, to identify common reactions, reagents and procedures and perform them. So you don't need to tell a lab tech to synthesize four batches of some catalyst — the AI can do it, and you don't even need to hold its hand.

Google’s AI researchers have had a big week as well, diving into a few interesting frontier domains. FunSearch may sound like Google for kids, but it is actually short for function search, which, like Coscientist, can make and help make mathematical discoveries. Interestingly, to prevent hallucinations, it (like other recent systems) uses a matched pair of AI models, a lot like the “old” GAN architecture: one theorizes, the other evaluates.

While FunSearch isn’t going to make any ground-breaking new discoveries, it can take what’s out there and hone or reapply it in new places, so a function that one domain uses but another is unaware of might be used to improve an industry standard algorithm.
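FunSearch's own code isn't reproduced here, but the propose-and-evaluate pattern described above can be sketched generically: one component proposes candidate solutions, an independent scorer evaluates them against a known objective, and only candidates that improve the score survive. Everything below (the toy objective and the mutation step) is illustrative; the real system proposes whole programs with an LLM rather than perturbing numbers.

```python
# Toy sketch of a propose-and-evaluate loop in the spirit described above:
# a "proposer" suggests candidates, a separate "evaluator" scores them,
# and only improvements are kept. The objective here is deliberately trivial.
import random

def evaluator(candidate):
    # Score a candidate: how closely three coefficients match a hidden target.
    target = (3.0, -1.0, 2.0)
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def proposer(best):
    # Propose a new candidate by perturbing the current best (a stand-in for
    # the far richer program-generation step used in systems like FunSearch).
    return tuple(c + random.uniform(-0.5, 0.5) for c in best)

best = (0.0, 0.0, 0.0)
best_score = evaluator(best)

for _ in range(2000):
    candidate = proposer(best)
    score = evaluator(candidate)
    if score > best_score:  # the evaluator, not the proposer, gets the final say
        best, best_score = candidate, score

print(best, best_score)
```

The separation of roles is the point: because candidates only survive if the evaluator can verify them, a hallucinated "discovery" never makes it into the pool.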

StyleDrop is a handy tool for people looking to replicate certain styles via generative imagery. The trouble (as the researchers see it) is that if you have a style in mind (say “pastels”) and describe it, the model will have too many sub-styles of “pastels” to pull from, so the results will be unpredictable. StyleDrop lets you provide an example of the style you’re thinking of, and the model will base its work on that — it’s basically super-efficient fine-tuning.

The blog post and paper show that it’s pretty robust, applying a style from any image, whether it’s a photo, painting, cityscape or cat portrait, to any other type of image, even the alphabet (notoriously hard for some reason).

Google is also moving along in the generative video game arena with VideoPoet, which uses an LLM base (like everything else these days… what else are you going to use?) to do a bunch of video tasks, turning text or images to video, extending or stylizing existing video, and so on. The challenge here, as every project makes clear, is not simply making a series of images that relate to one another, but making them coherent over longer periods (like more than a second) and with large movements and changes.

VideoPoet moves the ball forward, it seems, though as you can see, the results are still pretty weird. But that’s how these things progress: First they’re inadequate, then they’re weird, then they’re uncanny. Presumably they leave uncanny at some point but no one has really gotten there yet.

On the practical side of things, Swiss researchers have been applying AI models to snow measurement. Normally one would rely on weather stations, but these can be few and far between, and we have all this lovely satellite data, right? Right. So the ETHZ team took public satellite imagery from the Sentinel-2 constellation, but as lead Konrad Schindler puts it, “Just looking at the white bits on the satellite images doesn’t immediately tell us how deep the snow is.”

So they put in terrain data for the whole country from their Federal Office of Topography (like our USGS) and trained up the system to estimate not just based on white bits in imagery but also ground truth data and tendencies like melt patterns. The resulting tech is being commercialized by ExoLabs, which I’m about to contact to learn more.
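The details of the ETHZ pipeline aren't spelled out here, but the general recipe they describe, combining per-pixel satellite reflectance with terrain features and regressing against ground-truth station measurements, can be sketched with off-the-shelf tools. The feature names and synthetic data below are placeholders, not their actual model.

```python
# Minimal sketch of the general recipe: regress snow depth from satellite bands
# plus terrain features, trained against station ("ground truth") measurements.
# Features and data are synthetic placeholders, not the ETHZ team's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Per-location features: a few optical bands plus terrain descriptors.
bands = rng.uniform(0, 1, size=(n, 4))           # e.g. visible/NIR reflectance
elevation = rng.uniform(200, 3500, size=(n, 1))  # metres
slope = rng.uniform(0, 45, size=(n, 1))          # degrees
X = np.hstack([bands, elevation, slope])

# Synthetic "ground truth" snow depth loosely tied to brightness and elevation.
y = 0.002 * elevation[:, 0] + 2.0 * bands[:, 0] + rng.normal(0, 0.3, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out locations:", model.score(X_test, y_test))
```

The key idea, as Schindler's quote suggests, is that reflectance alone is ambiguous; adding terrain context and training against real measurements is what turns "white bits" into depth estimates.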

A word of caution from Stanford, though — as powerful as applications like the above are, note that none of them involve much in the way of human bias. When it comes to health, that suddenly becomes a big problem, and health is where a ton of AI tools are being tested out. Stanford researchers showed that AI models propagate “old medical racial tropes.” GPT-4 doesn’t know whether something is true or not, so it can and does parrot old, disproved claims about groups, such as that black people have lower lung capacity. Nope! Stay on your toes if you’re working with any kind of AI model in health and medicine. 
