
Making Artificial Intelligence Fit Human Futures

Lord Chris Holmes has introduced a Private Member’s Bill on AI regulation in Parliament. He explains the importance and goals of the bill.




By Lord Chris Holmes, Published: 28 Nov 2023

It was a privilege to introduce my Private Member’s Bill on Artificial Intelligence (AI) into the House of Lords this month. In the Bill I have tried to incorporate much of what I believe we need to sort, in short order, when it comes to AI.

Every King’s Speech offers members of both Houses of Parliament the opportunity to put a potential law forward for parliamentary consideration. It’s a ballot – so Lady (or indeed Lord) Luck needs to be on your side. If you come in the top 25 or so, then your Bill has a good chance of getting a full hearing at “second reading” and from there a chance – slim but still a chance – of making it onto the statute book.

This year I was one of the lucky ones and my proposed Private Member's Bill – the Artificial Intelligence (Regulation) Bill – was drawn near the top and has been introduced, with second reading to be scheduled in the new year.

I drafted the Bill with the essential principles of trust, transparency, inclusion, innovation, interoperability, public engagement, and accountability running through it.

The AI Authority

The first section sets out the requirements for an AI Authority. In no sense do I see this as the creation of an outsized, do-it-all regulator. Rather, the role is one of coordination, ensuring that all the relevant existing regulators address their obligations in relation to AI.

Setting up AI regulation in this horizontal rather than vertical fashion should give a better chance of alignment, ensuring a consistent approach across industries and applications rather than the potentially piecemeal approach likely if regulation is left only to individual regulators. This horizontal view should also allow a clearer gap analysis to be drawn out and addressed.

The proposed AI Authority should also undertake a review of all relevant existing legislation, such as consumer protection and product safety, to assess its suitability to address the challenges and opportunities presented by AI.

It is critical that the AI Authority is both agile and adaptable and very much forward facing. To enable this, it must conduct horizon-scanning, including by consulting the AI industry, to inform a coherent response to emerging AI technology trends.

Building on the UK’s sound basis for principles-based regulators, AI regulation should deliver safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Transparency and testing

Turning to business, any business which develops, deploys, or uses AI should be transparent about it; test it thoroughly and transparently; and comply with applicable laws, including in relation to data protection, privacy, and intellectual property.

To assist in this endeavour, the concept of sandboxes can be brought positively to bear. We have seen the success of the fintech regulatory sandbox, replicated in well over 50 jurisdictions around the world. I believe a similar approach can be deployed in relation to AI developments and, if we get it right, it could become an export in itself.

Building on amendments I put forward to the recently passed Financial Services and Markets Act 2023, the AI Bill also proposes a general responsibility on every business developing, deploying, or using AI to have a designated AI officer.

The AI officer will be required to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business and to ensure, so far as reasonably practicable, that data used by the business in any AI technology is unbiased.

Turning to intellectual property (IP), I suggest that any person involved in training AI must supply to the AI Authority a record of all third-party data and IP used in that training, and assure the AI Authority that all such data and IP is used with informed consent. We need to ensure, as in the non-AI world, that all those who create and come up with ideas can be assured that their creations, their IP, their copyright are fully protected in this AI landscape.

Public engagement

Finally, none of this is anything without effective, meaningful, and ongoing public engagement. No matter how good the algorithm, the product, the solution, if no one is “buying it”, then, again, none of it is anything or gets us anywhere.

In 2020 our Lords Select Committee on Democracy and Digital Technologies warned that the proliferation of misinformation and disinformation would “result in the collapse of public trust, and without trust democracy as we know it will simply decline into irrelevance.” The mainstreaming of AI tools may well only accelerate this process, but we have the opportunity here to use the same technology to engage the public in a way that builds trust.

To this end, it is essential that the Authority implements a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI; and consults the general public as to the most effective frameworks for this engagement, having regard to international comparators.

As everyone has now descended safely from the government’s AI Safety Summit, perhaps we are left to ponder what emerged. For me, as important as anything is the fact that it shone a light on something truly worthy of our national pride.

Two generations ago, a diverse team at Bletchley Park gathered at one of the darkest hours in our history. Together, they developed and deployed the leading-edge technology of their time to defeat one of the greatest threats humanity has ever faced. We sadly face similar challenges today. If we get it right, human-led AI can once again defeat the darkness and enable so much light. I hope that, through mass support, my Bill can play its small part in enabling the opportunities while staring down the risks.

Lord Chris Holmes of Richmond is a member of the House of Lords, where he sits on the Select Committee on Science and Technology. He is also a passionate advocate for the potential of technology and the benefits of diversity and inclusion and is co-chair of parliamentary groups on fintech, artificial intelligence, blockchain, assistive technology and the 4th Industrial Revolution. An ex-Paralympic swimmer, he won nine gold, five silver and one bronze medal across four Games, including a record haul of six golds at Barcelona 1992.


Mapping the AI Ethics Landscape and Authorities in the UK: An Introduction to the Key Issues and Considerations in AI Ethics, from Bias and Fairness to Transparency and Accountability


AI Ethics involves examining the moral and ethical implications of AI systems, ensuring that they are developed, implemented and used in a responsible and fair manner. This article aims to provide an introduction to the key issues and considerations in AI ethics, ranging from bias and fairness to transparency and accountability.

In the fast-paced world of technology, artificial intelligence (AI) has emerged as a powerful force with the potential to revolutionise various industries. However, with great power comes great responsibility, and that is where AI ethics steps in.

The Landscape and Authorities

The scope and importance of AI ethics cannot be overstated. As AI systems become more prevalent and influential in our lives, it is crucial to address the ethical implications they pose. The Office for Artificial Intelligence, a division of the UK government, has played a vital role in shaping AI policy, recognising the significance of this field in ensuring the responsible development and use of AI technologies.

To understand the current landscape of AI ethics, it is essential to delve into its historical context and evolution. From the early days of simple algorithms to the complex neural networks of today, AI has come a long way. Milestones such as the development of ethical guidelines and the establishment of institutions dedicated to AI ethics have significantly influenced our understanding of the ethical implications of AI.

One of the most pressing issues in AI ethics is bias and fairness. AI systems are built on data, and if that data contains biases, it can lead to discriminatory outcomes. Defining bias in AI and examining its impacts on various societal sectors is crucial to ensure fair and unbiased decision-making. The Ada Lovelace Institute has been at the forefront of advancing strategies for fairness in AI, providing valuable insights into this important aspect.
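One common way to make "fairness" concrete is to compare a model's decision rates across groups. As a minimal sketch (with entirely hypothetical data, not drawn from any real system), the demographic parity gap can be computed like this:

```python
# Minimal sketch of one common fairness check: demographic parity.
# The data below is hypothetical and purely illustrative.
# Each record: (group, model_decision), where decision 1 = approved.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def approval_rate(records, group):
    """Share of positive decisions for one group."""
    outcomes = [d for g, d in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")  # 0.75
rate_b = approval_rate(decisions, "group_b")  # 0.25

# Demographic parity difference: a large gap suggests the model (or the
# data it was trained on) treats the groups differently and warrants
# further investigation.
parity_gap = abs(rate_a - rate_b)
print(parity_gap)  # 0.5
```

A gap of zero would mean both groups are approved at the same rate; in practice, demographic parity is only one of several fairness criteria, and the appropriate one depends on the application.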

Transparency is another key consideration in AI ethics. It is vital to understand how AI systems make decisions to build trust and ensure accountability. However, achieving transparency in AI systems poses several challenges. The Alan Turing Institute, a renowned research institution, has been actively exploring these challenges and working towards creating more transparent AI systems.

In the realm of AI ethics, accountability and responsibility play a vital role. Determining who should be held accountable for AI decisions and understanding the legal and ethical responsibilities involved is crucial. This ensures that AI is used in a manner that prioritises the well-being and rights of individuals and society as a whole.

As AI systems rely on vast amounts of data, privacy concerns are of utmost importance. Striking the right balance between AI advancement and user privacy is a delicate task. The Information Commissioner’s Office (ICO) provides guidelines on data protection and privacy in AI, offering valuable insights into protecting personal information while leveraging AI technologies.

While ethical considerations are essential, AI also presents numerous opportunities for social good. AI applications can be harnessed to address societal challenges and deliver positive impacts. However, it is crucial to ensure that these applications are designed and implemented ethically, taking into account potential unintended consequences. Ethical considerations should guide the development and implementation of AI for social good.

The field of AI ethics is not limited to national boundaries; it requires a global perspective. Different cultural and regional approaches to AI ethics exist, and understanding these perspectives is crucial for fostering international collaboration. The Nuffield Council on Bioethics has been a proponent of international collaboration, emphasising the need for diverse voices and perspectives in shaping AI ethics.

Looking to the future, emerging trends and challenges in AI ethics need to be anticipated. As AI technologies continue to evolve, new ethical dilemmas will arise. Education and policy have a crucial role to play in shaping the future of ethical AI, ensuring that developers, users, and policymakers are equipped with the necessary knowledge and tools to navigate the ethical landscape.


In summary, mapping the AI ethics landscape is vital for ensuring the responsible and ethical development and use of AI technologies. This introduction has explored key issues such as bias and fairness, transparency, accountability, privacy concerns, opportunities for social good, global perspectives, and future trends.

It is essential to integrate ethics into AI development to safeguard against unintended consequences and to create AI systems that benefit society as a whole. As technology continues to advance, the ongoing journey of ethical AI reminds us of the importance of continuously evaluating and addressing the ethical implications of our rapidly evolving technological landscape.


AI Governance: What are the Latest Global Initiatives?


Why AI Governance?

As artificial intelligence rapidly evolves and grows more powerful, there are serious concerns about potential downsides like job losses, bias, discrimination and misuse. AI governance aims to mitigate these risks and ensure AI benefits everyone. Let's explore some of the latest developments here.

AI Governance in the European Union

The European Union’s recent AI Act aims to regulate the development and use of AI, with a focus on high-risk systems. It requires more transparency in how AI models are developed and holds companies accountable for any harms resulting from their use. The Act mandates that companies must assess and mitigate risks, ensure system security, and report serious incidents and energy consumption. Notably, the EU is also working on the AI Liability Directive, which would enable financial compensation for those harmed by AI technology​​.

AI Governance in the United Kingdom

The UK has taken a more hands-off approach compared to the European Union. The UK, home to significant AI research and development, including Google DeepMind, has indicated that it does not plan to regulate AI in the short term. However, companies operating in the UK will still need to comply with EU regulations if they wish to do business within the European Union.

This situation reflects the “Brussels effect,” where the EU’s regulatory standards tend to set a de facto global standard, as seen previously with the General Data Protection Regulation (GDPR). The UK’s approach suggests a balance between fostering innovation in AI and the need for regulatory oversight, with an eye on developments in the EU and other regions​​.

The UK’s approach to AI governance as of 2024 is encapsulated in the Artificial Intelligence (Regulation) Bill introduced to the UK Parliament. Key aspects of this bill include:

Creation of an AI Authority

This body will evaluate the regulatory framework’s effectiveness in fostering innovation and managing AI risks. It will engage in horizon scanning, collaborate with the AI industry, accredit independent AI auditors, and educate businesses and individuals about AI. It will also align with international AI regulatory standards​​.

Appointment of AI Officers

Certain organizations will be required to appoint an AI officer, responsible for ensuring safe, ethical, and unbiased use of AI within the business. This includes guaranteeing that data used in AI technologies is unbiased​​.

Reporting Requirements for Third-Party Data and IP

All parties involved in AI training must submit detailed records of any third-party data and IP utilized during training to the AI Authority. Entities providing AI-based products or services must inform customers about any health risks, include explicit labels, and offer opportunities for consent​​.
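The Bill does not prescribe any format for these records, but the idea of a reportable provenance entry can be sketched as follows. All field names and values here are hypothetical, chosen only to illustrate what recording third-party data sources and consent status might look like:

```python
# Hypothetical provenance record for third-party training data.
# The Bill prescribes no format; this sketch only illustrates recording
# data sources and consent status for later reporting to a regulator.
record = {
    "dataset": "example-news-corpus",      # illustrative dataset name
    "supplier": "Example Media Ltd",       # third-party rights holder
    "licence": "commercial-licence-2024",  # legal basis for use
    "informed_consent": True,              # consent obtained from supplier
    "used_in_training_run": "run-0042",
}

REQUIRED_FIELDS = {"dataset", "supplier", "licence", "informed_consent"}

def is_reportable(rec):
    """A record is complete only if every required field is present
    and informed consent is affirmatively recorded."""
    return REQUIRED_FIELDS <= rec.keys() and rec["informed_consent"] is True

print(is_reportable(record))  # True
```

In practice, such records would likely need to capture far more detail (dates, scope of licence, contact points), but even a simple completeness check like this makes "detailed records of third-party data and IP" auditable rather than aspirational.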

Principles-based Regulatory Approach

The UK’s regulatory framework proposed in a White Paper is underpinned by five broad principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. This framework is expected to be issued on a non-statutory basis initially, with a statutory duty on regulators to have “due regard” to these principles in the future.

Empowering Existing Regulators

Instead of creating a new AI regulator, the UK Government plans to support existing regulators to apply these principles using their available powers and resources. This approach aims to provide clear and consistent guidance for businesses operating under multiple regulators​​.

Centralized Function Support

The UK Government proposes creating central functions to support the AI regulatory framework, including developing a central monitoring, evaluation, and risk assessment framework, and offering a multi-regulator AI sandbox​​.

Focus on Generative AI

The UK Government plans to clarify the relationship between intellectual property law and Generative AI and to establish a regulatory sandbox for AI innovations covering multiple sectors​​.

This Bill is at an early stage in the legislative process and is subject to change following consultations. However, businesses should be developing robust AI governance programmes to ensure responsible development, deployment and use of AI systems in anticipation of these regulations.

AI Governance in China

AI regulation in China has been more fragmented, with individual legislation for different AI applications (e.g., algorithmic recommendation services, deepfakes, generative AI). However, China plans to introduce a comprehensive AI law covering all aspects of AI, similar to the EU’s approach. This law would include a national AI office, annual social responsibility reports for foundation models, and a negative list of high-risk AI areas requiring government approval for research​​.

AI Governance in the United States

The California Privacy Protection Agency (CPPA) has proposed regulations on automated decision-making under the California Consumer Privacy Act (CCPA). This includes rights for consumers to receive notice and opt out of certain automated decisions. The U.S. Securities and Exchange Commission (SEC) has also proposed rules to address conflicts of interest posed by AI use in financial services. The U.S. AI Executive Order calls for testing and reporting rules for AI tools, focusing on cybersecurity and privacy risks.

World Health Organisation

The WHO released guidance on the ethics and governance of large multimodal models (LMMs) in healthcare, outlining over 40 recommendations for governments, technology companies, and healthcare providers. These guidelines focus on the appropriate use of LMMs to promote health and protect populations from potential risks.

Global Efforts

These developments reflect a growing global effort to regulate AI technologies, addressing ethical, legal, and societal concerns. The EU’s proactive stance is particularly influential, potentially setting a de facto global standard for AI governance, similar to its impact with the General Data Protection Regulation (GDPR).

More than 37 countries, including India and Japan, have proposed AI-related legal frameworks. The United Nations has established an AI advisory board to create global agreements on AI governance. The Bletchley Declaration, signed by representatives from the EU, U.S., U.K., China, and other countries, emphasizes trustworthy AI and calls for international cooperation​​.


In summary, AI governance is becoming increasingly important as artificial intelligence continues to advance. The rapid evolution of AI brings immense opportunities but also serious concerns about job displacement, bias, discrimination and misuse. Different parts of the globe, including the EU, the United Kingdom, China and the United States, are taking different approaches to regulate and address these concerns.

The EU is leading the way with the AI Act, which focuses on high-risk AI systems, transparency, and accountability. The UK is adopting a principles-based regulatory approach with the Artificial Intelligence (Regulation) Bill, emphasising safety, fairness, and accountability. China is working on comprehensive AI legislation, and the United States is proposing regulations for automated decision-making and AI use in financial services.

Clearly there is much in motion, and even more on the way, in global AI governance through 2024.



Can Artificial Intelligence Replace Humans? An Engineering Perspective

In a world dominated by economic value and increased automation, there is a growing worry about whether Artificial Intelligence will replace humans. Yet, many believe that instead of taking jobs away, AI is transforming how we work, unlocking human potential and changing how we innovate and boost productivity. In engineering, it’s crucial to identify roles susceptible to AI and automation, as well as those resilient to change. In this piece, we’ll navigate the dynamic landscape and dive into the impact of artificial intelligence on engineering and related disciplines.

AI Robots Are On The Rise

It was expected that robots would replace low-skilled labour, particularly in monotonous and dangerous tasks on factory floors. In reality, human labour remains more cost-effective than investing in purchasing and programming robots for most facilities. In addition to the robotics hardware, the cost of training is substantial: every time the process changes, traditional robots must be retrained.

Only in large-scale production, such as smartphone assembly, has robotics become practical due to the high volume. A big breakthrough is on the horizon, though: The latest robotics systems with computer vision and artificial intelligence can train themselves and follow generic commands in natural language. When you can “ask” a robot to separate red “things” and green “things” in plain English, robotics automation has tremendous potential.

AI Algorithmic Copywriting

Copywriting became popular because people realized that persuasive content makes a big impact in grabbing the audience’s attention. Whether it’s on websites, press releases or various media platforms, effective text plays a vital role in conveying official information and engaging with potential customers.

Presently, the work of copywriters may be greatly facilitated by artificial intelligence. While the profession won’t disappear entirely, AI may empower engineers with specialized knowledge to write compelling articles without hiring other people. AI cannot completely replace copywriters, as refined taste and the quality of the text are crucial factors that AI may struggle to replicate. Nonetheless, the significance of particular knowledge in specific domains is also starting to emerge.

AI Designer

While graphic designers are all-in on adopting AI tech, the realm of industrial design clings to manual processes. However, that doesn’t imply industrial designers are barred from, or should refrain from, harnessing the power of AI. For instance, they can extract valuable insights by using AI to generate multiple product concepts faster. Alternatively, they can task AI with generating a substantially broader range of product sketches, enhancing the exploration of design possibilities.

At present, AI-generated product renders often fall short of perfection or don’t account for manufacturing limitations. Nevertheless, continuous refinement through iterative prompts is feasible. This process might result in renders and sketches at a reasonable pace, potentially faster than starting from scratch, yet it may not revolutionize the field. While generative AI can assist less skilled designers and expedite the design process, high-end professionals rely on their processes and creativity.

AI 10x Engineers

The hot conversation in Silicon Valley revolves around whether AI can replace large software engineering staffs. Big tech companies hire tens of thousands of engineers to write generic and not always groundbreaking software. Should programmers be worried about their jobs? It depends.

Website designers are at risk: AI tools can create great-looking web pages with simple prompts. Further customization, such as changing fonts or adding buttons, is even easier than asking your programmer friend.

More advanced systems are developed by hundreds of programmers. Following the 80/20 rule, even before AI, a few key members created most of the value. Who is a “10x engineer”? A person who can write ten times more code than an average programmer. With AI, 10x engineers can drive even more value. Shall we say 100x? This way, a team of 10 programmers together with an AI copilot – a system that engineers can “ask” to write a piece of code – will do more than their entire organization did before.

AI Electronics Engineers

Electrical engineers develop physical products. Every headphone or camera developed in the U.S. was designed by a team of engineers, each earning $150,000 or more per year. This drives the total development cost into several million dollars. Can AI design a new product for a startup? The short answer is NO.

However, it can augment and expedite development. For instance, AI could generate a diagram outlining the major components of a specific device. Engineers spend a lot of time manually selecting components and discussing them with manufacturers. An AI co-pilot may compile a list of the top 10 manufacturers, find their contact info and draft email requests for pricing and documentation. AI can also help perform calculations and solve math problems. After component selection, engineers connect all parts on the Printed Circuit Board (PCB). While it remains a mainly manual process, CAD software is incorporating more and more AI tools to expedite the development.

AI Mechanical Engineers

The development process unfolds as our designer crafts the initial design, and our electronics team translates it into a functional PCB. At a certain point, the design is handed over to Mechanical Engineers, who transform it into a manufacturable enclosure. Integration of electronics and mechanics is a meticulous, hands-on affair. Currently, no AI software exists to seamlessly handle the complexities of developing mechanical devices. Even sophisticated tools fall short, and the integration process remains a manual craft. The limited AI involvement may extend to highlighting potential conflicts or identifying areas where design aesthetics could be improved, but the bulk of the work is done by skilled hands.

No One Likes To Dwell On Limitations

Our world is and should be evolving, but technology is far from being able to fully replace highly skilled engineers. Key qualities such as creativity, effective communication and the ability to devise innovative solutions remain invaluable. While AI can complement and enhance certain aspects of work, it is unlikely to completely overshadow the expertise and capabilities of skilled specialists. The reliance on the human touch remains irreplaceable.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.


