WiBT (00:02)
Welcome to another episode of Women in Blockchain Talks. Today I'm excited and happy to be speaking with a good friend of mine in the industry, Michael Borelli. Michael is a male ally of Women in Blockchain Talks, and for me it's super important to spotlight and amplify the voices of men who walk their talk and support women, gender diversity, women's leadership and inclusion in emerging tech, but particularly the blockchain space. People can talk the talk, but do they walk the walk? That is the question. So, Michael, welcome to the Women in Blockchain Talks podcast. How are you? First and foremost, thank you for having me on the show. It's great to be here and great to be able to support Women in Blockchain Talks. Big fan of your work. So, great to be here. Very well, I think.
It's been a busy month, and as we were saying before, September's looking to be exceptional with the EU AI Act now in force. Yes, it came in on the 1st of August, correct? That is true. It's now entered into force after more than three years of trials and tribulations and all the stuff that happens in between. Wonderful. So before we dive into that, would you like to introduce yourself and talk about who you are, what you do and why you do it?
Sure, so my name is Michael Borelli. I work at AI & Partners and focus on the regulation and business development aspects. My business partner, Sean, who's the founder and CEO, set up AI & Partners three and a half years ago and asked me to join his team. A bit of background on him: he's an ex-tech accountant who worked at KPMG. He was asked to join a partner programme, declined it, and managed an IPO on the New York Stock Exchange. So when he saw the proposal for the EU AI Act, he thought it was going to be huge when it hit the market, because he'd had experience of the implementation of GDPR. So he asked me to join him to be almost the yin to his yang. He said, look, we could build this together, but we've got to start early. And we've built it from scratch over three and a half years. Why do we do it? To be more of a creator or an artisan; I think that's how people refer to me. I just like to create and build things, and to be able to bring communities together. I'm a big believer in the ecosystem effect, hence why we're here today, and in putting your own mark on the world, hopefully looking back in a few years' time and thinking: I was part of something that was way bigger than anything I could have achieved myself. I love that. It's always about vision. It's interesting, because for me, being in the blockchain space, it is about the tech, it is about the innovation. But what comes with the innovation is entrepreneurship and that vision many entrepreneurs have of creating something that is bigger than them,
and being able to be part of a space where that actually can happen. I mean, of course there are many industries, there are many skill sets and experiences we all have, and we can bring our individuality to the table. But what I love about emerging tech and the blockchain space is that it is still nascent. Yes, it has its trials and tribulations, to use a phrase you just used, but with that being said, there are a lot of opportunities. And I believe it is an egalitarian space in the sense that it isn't looking at your gender or your age or your ethnicity. If you have something to bring to the table, there is no gatekeeper to say whether you're allowed in or not, whereas some other industries are heavily gatekept. So for me, blockchain is an area where one has the opportunity, if one chooses to do the work and has the vision, to create something that can have a positive impact for the many, and I love that. So with that being said, you know, I am in the blockchain space, you are in the AI space. I know that you know about blockchain and crypto; you have a strong understanding of that. So talk to us a little bit about why AI and blockchain, as two emerging technologies, are so important for us to understand, and how they intrinsically link together.
Cool, we could be here for quite some time talking about that. I would say we're moving towards the digital economy. At the European level there's a Digital Europe 2030 agenda, which was set out by Ursula von der Leyen, and the management of the economy is increasingly based on data. It's all part of the digital ecosystem, and within that ecosystem you have the blockchain elements and AI. The best way to think about how they're linked is to look at what they call the killer use case for AI, which the FT reported this week is software development. AI significantly helps the productivity of software development, helping with the coding of whichever language you use: C++, Java, Python, all that stuff. And blockchain is obviously code-based; it's underpinned and written in code. So if you want to update or build protocols on blockchain or things like this, you need to write code, and AI helps facilitate that: faster, quicker and better quality. If you look at controlled studies done by BCG, for example, their consultants can do tasks around 40% faster and more productively without a drop in quality. So I think, from a utility perspective, the best way to link them is that AI helps facilitate the very creation process that supports the blockchain ecosystem. I like that. I mean, that's a very technical and deep answer to the question, and of course it doesn't negate it. But as an entrepreneur, I would say
that the area where I have seen AI work quite fundamentally with blockchain, and the ecosystem more importantly, is proof of provenance, having that proof of provenance. Would you agree? Absolutely. It's all about, if I understand correctly, provenance: showing where the origin of the data is. For example, if you're storing a record on the blockchain, it's there, it's immutable, it can't be disputed, obviously subject to the integrity of the blockchain and how the update process works.
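To make the provenance point concrete, here is a minimal, hypothetical sketch in Python of the usual pattern, with made-up field names: fingerprint a data record off-chain with a cryptographic hash, then anchor only that digest on a ledger so anyone can later verify the record is unchanged.

```python
import hashlib
import json

def record_fingerprint(record: dict) -> str:
    """Deterministic SHA-256 fingerprint of a record. Anchoring this digest
    on-chain lets anyone later prove the record hasn't been altered."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative record only; the fields are stand-ins, not a real schema.
dataset_entry = {"source": "sensor-17", "collected": "2024-08-01", "value": 42}
print(record_fingerprint(dataset_entry))  # store this digest on the ledger
```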
AI is about trust. Fundamentally, it's about people building what the EU AI Act calls safe, secure, ethical and trustworthy AI. What people are worried about is that it's a black box: they don't know what data it's trained on, they don't know how it's being used. So the EU AI Act effectively installs a risk-based approach to the use, development, deployment of and interaction with an AI system. So they're both trust-based mechanisms, and trust, fundamentally, is the most important currency around the world. Trust in data. It's about trusting, you know, who's dealing with our data, and where that data has come from that has shaped the AI dataset one is using in a small language model or large language model. Which takes us quite nicely to the EU AI Act; it's a bit of a mouthful. And I'd like to know, this is your world now, so what are the key provisions of the EU AI Act? I think you just touched on an element of it, but can you expand on that more? And in addition to that question, which provisions of the EU AI Act do you think will have the most significant impact on global businesses? Why is it so important?
I guess it's important because it goes back to their agenda: they want to support the uptake of trustworthy AI. AI has obviously been around for some time, and they want to make sure that this latest technological boom happens sustainably and that the use, development and deployment of trustworthy AI continues to proliferate in the market. We're seeing a lot of solutions being built and used, and there are a lot of concerns about market integrity. The key provisions depend on the risk classification. A lot of the provisions are applicable to high-risk systems. To be a high-risk system, a system has to be deemed to pose a high risk of harm to an individual's health, safety and fundamental rights. But you could also have an unacceptable-risk system, a specific-transparency system or a minimal-risk one. So the question of what the key provisions are depends on the risk level. To move further on that, if you're a high-risk system, among the key provisions you would have to put a risk management system in place, and a quality management system, and you would have to provide transparency of information to users so they could see what the data is trained on, as you mentioned before. Article 15 covers things like cybersecurity, so something like the cybersecurity incident earlier this summer would be in scope. And another good example would be Article 14, which is human oversight: any use or deployment of an AI system has to have a sufficient level of oversight, but the person providing that oversight, like lots of your clients and the people you deal with in the banking sector, would have to have the relevant knowledge, experience and expertise in order to perform said oversight.
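As a reader aid, here is a much-simplified, hypothetical Python sketch of the tiered structure just described, mapping each risk level to the kinds of obligations mentioned; the real obligations are set out article by article in the Act itself, so treat this as an illustration, not legal guidance.

```python
# Simplified illustration of the risk tiers discussed above; the labels and
# groupings are approximations, not the statute's exact wording.
OBLIGATIONS_BY_TIER = {
    "unacceptable": ["prohibited from the EU market (Article 5)"],
    "high": [
        "risk management system",
        "quality management system",
        "transparency of information to users (e.g. what data it was trained on)",
        "accuracy, robustness and cybersecurity (Article 15)",
        "human oversight by suitably qualified people (Article 14)",
    ],
    "specific-transparency": ["disclose to users that they are interacting with AI"],
    "minimal": ["no mandatory obligations; voluntary codes of conduct"],
}

def obligations(tier: str) -> list[str]:
    """Look up the example obligations for a given risk tier."""
    return OBLIGATIONS_BY_TIER.get(tier, ["unknown tier: seek legal review"])

print(obligations("high"))
```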
Wow. So it's now live as of the 1st of August, and we know that all businesses within the EU have to abide by this new law. Now, when we look at the AI market, we know the biggest players are China and the US, right?
So I have two questions for you. From the work that you do, are you seeing more of an uptake within the EU in regards to the use of AI? And the second question is, how will the EU AI Act affect companies operating outside the EU that offer services and products within the EU market?
Great questions. If I answer your second one first, because it's top of mind: the regulation is extraterritorial. Similar to GDPR in that sense, what that means is it affects you whether you're based in Europe or outside of Europe. So if you're a business and you have any presence, nexus or footprint there, you would fall in scope. For example, we've written a white paper, coming out early next month, which examines the impact on third countries, third countries being any organization or country based outside of Europe. So if, as you mentioned, companies in China are looking to deploy their solutions on the EU market, they would have to abide by these rules. And that would include any standalone AI products or any products with AI embedded in them, software and hardware alike. So that could include anything from a teddy bear with an AI feature in it to Microsoft Copilot being rolled out across various enterprises. So it is that wide and expansive in that sense. We're actually also working with a few clients based in Oman and South Africa that are looking to go into the EU market for that reason. Would you mind repeating your first question, sorry? Yeah, I was saying that when we look at the AI market, we see that the bigger companies, the big boys in the space, are US- and China-based. So what is the outlook for the EU,
because of course we've got this Act, but if the bigger players are outside of the EU market, how much is it going to affect the EU market, and what is your forecast for the EU market and AI businesses? Well, it's interesting you say that, because the ecosystem model that seems to be proliferating is very much partnership-driven, so reselling and things like this. We've had people coming to us, for example, saying, hey, could you help us resell this AI product if you provide a compliance service? And the standard retort we have is: is it EU AI Act compliant? And they look quite blank, because they want people to put their products on the market, but we cannot resell any software or products that are not EU AI Act compliant. Now, some of the solutions we're seeing have been in development for years and have been built from the ground up with these regulatory requirements embedded in them. For them it's business as usual, whereas there are new products coming to market because people see that it's a new market, new opportunities, as you said before. The regulation provides a significant barrier to entry for them, which is great, because it makes sure that any solutions on the market are of sufficient quality and trustworthy. But would you say that with the EU AI Act coming into force and bringing this level of quality, and given that some people will see regulation as a restriction, do you think it will slow down innovation in the space?
I'm quite biased, as you can probably imagine, with this answer. Obviously, with the regulation being the core of what we do, I'm pretty optimistic on this. I don't think it will slow down innovation. Yes, it will force things to be taken into consideration, but then the solutions that are put on the market will be of such high quality that they're more likely to be innovative. It depends on what your definition of innovative is, of course, but by making sure these very rigid standards apply, it'll speed up the innovation of products that are trustworthy and compliant, whereas it may slow down the innovation of products that are very quick to build and are cutting corners. I think the old adage of fail fast, fail quickly is probably going to apply to the state of innovation for non-trustworthy products. Yeah. And we're going to touch on this in a moment, but the fail fast, fail early approach cannot work with AI, because it's such a powerful tool and it's based on
how we think as human beings and as a society, and there's the effect it can have on society and the way it is being used across the board, with people applying it to their everyday work and personal lives. And so with that sort of lackadaisical mindset around ethics, if you really want to break it down, and around how we communicate with each other, we do have to have regulation in place. We do have to have guardrails in place. I agree with you. Eric Schmidt, the ex-CEO of Google, was talking about this, and he was saying that we will get to the point, maybe in the next five to ten years, where AI is talking to AI and it's automated and they're making decisions, and human beings have kind of taken their foot off the pedal, because obviously we're creating all of this technology to make our lives easier. To make our lives easier for what, and what we will fill that time with, only time will tell, right? Right.
But ultimately, if the AI agents are talking to each other and there have been no safe rails or guardrails put in place, then we will lose control of this technology, and the impact could be quite detrimental. Yes. I think you've touched on a few points there. One of the biggest risks that springs to mind is that if AIs are talking to each other and one has biased input, you're effectively going to have a circulation where there's no way to flush out bad-quality information. If one AI is feeding another, you can have bad information recycled into the next model, which then trains on it, and you perpetuate these negative feedback loops. That alone, from an architecture standpoint, is going to cause a lot of people a lot of issues.
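A toy simulation makes the feedback-loop risk easy to see. In this hedged Python sketch, under the assumed setup that each "generation" of a model is trained only on samples drawn from the previous one, the estimated share of some attribute drifts away from the originally balanced data with every generation.

```python
import random

def resample_proportion(p: float, n: int = 500) -> float:
    """Draw n samples from a model that believes the proportion is p,
    then re-estimate p from those samples alone."""
    return sum(random.random() < p for _ in range(n)) / n

p = 0.5  # ground truth: a balanced attribute in the original training data
for generation in range(1, 21):
    p = resample_proportion(p)  # each model sees only the previous model's output
    print(f"generation {generation}: estimated proportion = {p:.3f}")
```

Run repeatedly, the estimate wanders and can eventually lock in at 0 or 1: bad information recycled with no fresh ground truth to flush it out, which is exactly the loop described above.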
The societal impact, as you say, is phenomenal. One of the very objectives of this regulation is to safeguard people's health, safety and fundamental rights. Actually, the European Charter of Fundamental Rights is embedded within the regulation: the right to dignity, the right to free speech. And if AI is not regulated and these rules are not abided by, there's a significant risk of those rights being violated on a mass scale. Yeah.
And I mean, Women in Blockchain Talks is all about diversity and inclusion, and particularly gender diversity, so we will touch on that in a moment, because we do know there are obviously problems of bias in AI, since it's created by humans who have biases. But just to continue on the vein of AI regulation and the free market, which, when one talks about the free market, is usually the USA: from your understanding, and also from being in this space, where do the US and China, if you are aware of this and can answer this question, where do they stand on regulation? And I ask this question because, as we've touched upon, the EU AI Act is now live, and off the back of that, Meta has held back certain AI elements of their products from being launched in the EU, and also Apple: they have some new AI features and they've said they're not allowed to be utilized in the EU region. So what do you understand, know or believe is going to happen with the regulations of these other countries, the US and China being the biggest ones?
I think both countries have taken a stance that they're going to put their own regulations in place, and off the back of that, I think every country globally recognizes that it can have its own framework. I think California has actually put through a bill in the past few weeks which has caused quite a lot of controversy, so that's going through the legal process and we'll see how it pans out. I believe China is considering its own legislation, and that's going through the legislative process there.
At AI & Partners, we take the position that AI regulation is going to stem from the EU AI Act and that it's going to be the global blueprint, in the same way GDPR was for data. I like to draw on an adage from Sean: if data is the new gasoline, then AI is the combustion engine. AI processes and uses data. GDPR is a regulation of data, and all the infrastructure was built around it to support that regulation. We are now regulating AI. So I agree with Sean's view on that, and the rollout of the EU AI Act is likely to follow a similar trajectory to GDPR. But there are plenty of people who probably disagree with this view, and that's OK. I mean, the point is, we know there is adverse data out there, and people are utilizing that data for their datasets. So how do we combat that? We have to deal with the problem that is at hand. And then, of course, even when you have a solution, other problems can arise from it. I think a lot of people really need to appreciate and understand that this is a nascent space, and this is the reason why I so strongly advocate for all diversities and all genders, of course, to get involved: we all need to be part of the solution, rather than just relying on one demographic to continue to lead, even though that demographic generally has the funds to be able to implement.
Going back to what Eric Schmidt was saying in regards to the development of AI, and obviously it's a fast-paced industry, where do you see AI in the next five to ten years? How do you see it developing? Before I answer, I'll just draw on what you said: I think the diversity and inclusion agenda is very important, and the role of Women in Blockchain Talks in supporting that is important. I know you're a Top Voice on LinkedIn, and I think people look to you for inspiration and guidance. I think you do a great job on that.
With respect to seeing AI over the next five to ten years, who knows? As for the five-year plans of businesses, I think you can really only look forward with a high degree of confidence for six months. And I see the space developing exponentially. I think we're going to see a lot of innovation in trustworthy products, a lot of fast-paced developments, because people see that there's opportunity. I think we've had a lot of hardship over the past few years, and people are looking at this with some merit, similar to the 90s and the growth of the internet. And because this is a general-purpose technology, which Mustafa Suleyman has referred to, it can be used for quite literally everything. So the use cases are going to be across sectors, sub-sectors, business functions, on a cross-country basis. You're going to have innovations and iterations of even one product: one product iterated in a thousand different ways, each way to be considered in its own right. So I think the pace will grow, probably like what you know as a Merkle tree in the blockchain, where every branch doubles. Forking. Exactly.
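For readers unfamiliar with the analogy: a Merkle tree pairs items and hashes them upward, with each level down doubling the number of leaves. A minimal Python sketch of computing a Merkle root:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash the leaves, then pairwise-hash each level until one root remains."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"tx1", b"tx2", b"tx3"]).hex())
```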
So I'm excited. It's a very interesting place to be; very lucky, very fortuitous to be here. And I think this is a really special time in history, because we're at not only the start of the AI governance ecosystem and the infrastructure, but the start of the AI revolution, one that even Alan Turing himself could probably look at and say: yeah, this is something which can benefit humanity. Wonderful. I like that sort of inspirational, bird's-eye-view statement. So obviously we know that this Act is in place, and this is, as you said, the beginning of regulation for AI. I mean, it's already been percolating, but this is the first one that's really out there and official, so to speak. So what steps should companies take to ensure compliance with the EU AI Act? And what are the hefty fines for non-compliance? What happens if people don't follow through? The first thing is, it's important to know what you have to do.
The main problem we see at the moment is that companies don't know what counts as AI in line with what the regulation says it is, or how many AI systems they have.
So we would ask people to follow the KYAI process: know your AI system. In traditional banking you have KYC, know your client. When you open a bank account or something like this, you are subject to due diligence and assigned a risk rating, and then the bank knows how to deal with you. The same process applies to an AI system. Now, in order for a company or enterprise to know what obligations they have, they'd have to undergo a KYAI: do a diagnostic and identify all the AI systems that they use or deploy, getting an inventory list, whether that's in the thousands, tens of thousands or millions, because it covers every model, system and algorithm. You would have to do that and then risk-classify each one. That will then determine the risk level, and the risk level is the key: the risk level of every system drives your regulatory requirements. So I think that's what they should do.
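A hedged sketch of what the KYAI inventory step might look like in practice, in Python with invented system names and a simplified tier list; the actual classification turns on detailed legal criteria and should involve legal review.

```python
from dataclasses import dataclass

# Simplified tier labels for illustration only.
RISK_TIERS = {"unacceptable", "high", "specific-transparency", "minimal"}

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: str  # assigned during the KYAI review

# Toy inventory; real ones may run to thousands of models, systems and algorithms.
inventory = [
    AISystem("cv-screener", "rank job applicants", "high"),
    AISystem("support-bot", "answer customer FAQs", "specific-transparency"),
    AISystem("spam-filter", "route inbound email", "minimal"),
]

for system in inventory:
    assert system.risk_tier in RISK_TIERS, f"unclassified system: {system.name}"
    print(f"{system.name}: {system.risk_tier} -> obligations scale with this tier")
```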
AI & Partners has been working extensively to help companies with the KYAI process. As for the fines, they vary; similar to GDPR, they're quite hefty. The first that springs to mind is a breach of Article 5, which covers prohibited systems and actually applies from the 2nd of February 2025, so not long to go in that sense. A breach of that means the company can be fined 35 million euros or 7% of global annual turnover, whichever is higher. So I think there's a very short time window for companies to complete the KYAI process, identify all their AI systems, categorize each one of them, take the remediation action, and follow that with governance. So there are three steps: step one, KYAI, which is all the scoping; step two, remediation, dealing with it subject to the firm's risk tolerance and things like this; and step three, governance, which is the business-as-usual monitoring and all that stuff, which probably a lot of firms in your ecosystem are very familiar with doing.
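The Article 5 penalty ceiling Michael quotes is simple arithmetic: the greater of a fixed sum and a share of turnover. A quick sketch:

```python
def max_article_5_fine(global_annual_turnover_eur: float) -> float:
    """Ceiling for breaches of Article 5 (prohibited systems):
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a firm turning over EUR 2bn, 7% (EUR 140m) exceeds the EUR 35m floor.
print(f"EUR {max_article_5_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```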
So here's a question: that sounds costly, like there's someone who has to audit, someone who has to provide some sort of quality assurance. Would that be in the form of a certificate? And if a business is paying for that, how do they guarantee that once they've had their audit and their checks, it holds, and for how long? Is it up until they add more data, or they purchase an AI platform from someone else, not their own bespoke or proprietary platform? Because most people that I know in business are using ChatGPT as their base and then building something bespoke on top of it, so to speak. So whose responsibility does that lie with? If you're choosing as a business owner to use ChatGPT and there are GANs, so generative adversarial networks, gremlins, biases in that, then who's responsible for that? Is it ChatGPT, or is it the business owner that is based in the EU?
There are two ways to answer that question, and the best thing to do is to remind people that the EU AI Act regulates the use, development and deployment of AI systems, amongst other things, and it is supported by the AI Liability Directive, a set of pieces of legislation going through the legislative process at the moment, which will handle liability. So whether it rests with the provider, in your example OpenAI for ChatGPT, or the deployer, i.e. the user, so any company using it. Now, to step back, everyone along the value chain has a responsibility. The key players in that sense, there are about seven: you have a provider, the person developing an AI system; a third party, providing tools to the provider; an authorized representative, who has been given a mandate by the provider to comply with all their obligations, similar to an appointed representative in financial markets; a deployer, anyone using an AI system; an importer, someone putting a system from outside the EU onto the market; a distributor, someone making it available on the market; and an operator, which is a term used to cover the other roles, specifically anyone who's not a third party.
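To keep the seven roles straight, here is a rough Python glossary of the value-chain roles as described in the conversation; the one-line descriptions are paraphrases of the discussion, not the Act's exact definitions.

```python
from enum import Enum

class ValueChainRole(Enum):
    """Paraphrased value-chain roles from the discussion above."""
    PROVIDER = "develops an AI system and places it on the market"
    THIRD_PARTY = "supplies tools to the provider"
    AUTHORISED_REPRESENTATIVE = "mandated by the provider to carry out its obligations"
    DEPLOYER = "uses an AI system, e.g. any company deploying it"
    IMPORTER = "puts a system from outside the EU onto the EU market"
    DISTRIBUTOR = "makes a system available on the EU market"
    OPERATOR = "umbrella term covering the roles above, other than a third party"

for role in ValueChainRole:
    print(f"{role.name}: {role.value}")
```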
Now, everyone along the value chain has a responsibility. So in terms of liability, if someone were, for example, to place an AI system on the market that they knew was high risk and had not undergone a conformity assessment, they could be liable, because they need to know what they're dealing with. Equally, if you're providing a toolkit to an unacceptable-risk system, you could arguably also be held liable, because there will be a contractual arrangement and you should know what those tools are being used for. This is all obviously subject to legal interpretation, and as we are at the very, very start of the implementation of the EU AI Act, you're going to see a lot of nuances, and the landscape will become apparent over the coming weeks and months, which could make some lawyers very happy and some businesses maybe not. But it's going to be
interesting to see. We're at the infrastructure-building phase, and once all the stuff is out there, businesses can go about it and it'll function in the same way as the markets you deal in as well, and the same way GDPR has been implemented. Right. Exactly. All right. So I did ask you the question about five years from now, and you were talking about how there will be different iterations of the same products and platforms and what have you. And I'm talking more about the EU region rather than just globally, but you can answer for the global market as well. So right now we do see that ChatGPT, they're the leaders, and there's ChainGPT, I believe, and a lot of people do build on top of them. So do you see companies moving more towards creating their own bespoke or proprietary AI LLMs or SLMs, rather than relying on the bigger boys and their platforms? And if so, why?
Yeah, first and foremost, yes, I think you're seeing that a lot of people are likely to do that, very likely because otherwise they won't have control over the data, the development process and making any adjustments, and probably to be more cost-effective. There are plenty of examples of people who have built their own large language models. I believe Bloomberg have their own, and they're just down the road. And others are learning how ChatGPT and other large language models work and then building or iterating on top of that. To use an analogy: the Bitcoin blockchain was built and then iterated on, Ethereum was built, and various other base-layer chains were built, and then you had layer twos. So you see the way the crypto ecosystem grew, almost like branches on a tree, and you're seeing the same thing here. Maybe, to use a comparison, ChatGPT and the big large language models are your Bitcoin, and then everything will grow from there. Who knows what all these innovations by some of the most amazing minds in this country and worldwide are going to bring to the market? And to be able to facilitate and help those extraordinary entrepreneurs is a true privilege. Okay. I mean, I do think that we're going to see more proprietary LLMs, where people are going to just use their own data to create their own platforms, because
with the risks now under these regulations, people are going to have to weigh up the options: do I buy a white-label product and just utilize it without really fully understanding the liability that comes with it, and the question of who is responsible for checking that the data is correct? And if it isn't, then who's going to be responsible for the fines, so to speak? And so I do think there is a business opportunity there for companies such as yourselves, other algorithmic auditing companies, and data creation and data curation firms, as well as coders and builders, because there are builders or developers who will just create the large language model platform but won't actually curate the data. So would you say that's a correct assessment, not of the technicality of large language models or the proprietary process, but that developers don't necessarily create the data, correct?
People are going to have developed different specialisms and have different involvements along the AI life cycle. Some may get involved in model training, some in data collection, some in deployment. You may want to have an allocation of different roles and responsibilities to prevent a single point of failure. There could be a technical issue they may not have the technical capacity to handle, and that's fine. Contrary to popular belief, there are plenty of opportunities in this new AI age; the somewhat popular rhetoric of job destruction is somewhat tiring in that sense. It's just an evolution, isn't it? An evolution. Humanity's greatest strength, as we've seen, is resilience and adaptation. It's survival of the fittest, so to speak. So, yeah, there are the different roles that they have. Yes, some may or may not get involved in the data curation, but it's important that people know what the technology is and how to use it to help advance their purposes. And we give people at least the ability to know what it is and take the opportunity; how they use it is obviously at their discretion. But we need to give people as much of a helping hand as possible in this new world, because if it's not managed and orchestrated correctly, it could, for example, further widen the social divide and inequalities that we see today. All right. So my last question to you, and this has been a really insightful and great conversation, so thank you, Michael.
How does the EU AI Act address algorithmic bias and discrimination? And what implications does this have for businesses utilizing AI systems now? I think you've touched on it, but I'd like you to dive in more, because bias and discrimination are a huge part of the conversation when one talks about AI and ethics.
Again, it is driven by the risk level of the AI system. So if it's categorized or classified as unacceptable, it's deemed to pose more of a risk to individuals' health, safety and fundamental rights, so aspects such as bias are inherent in those. If your system is high risk, you would have to comply with Article 10, which covers data governance: making sure that the collection and use of data and, to a certain extent, the cleansing of that data is such that it's not biased. But it goes without saying that everything is biased to a certain extent. Humans, by virtue of their nature, are biased, and we all carry biases with us by reason of education, background, experience and all those things. What the EU AI Act does to help prevent those, as I understand it, is place safeguards and requirements on providers and other people in the value chain based on the risk level. So it's all, I guess, anchored or centred around that risk level, because the risk level gives you an indication as to how biased or not, as the case may be, an AI system is, and the subsequent risks it poses to people and the types of impact, which vary from sociological to psychological, physiological and economic. So
it imposes a lot of requirements. I think businesses are probably at the market-awareness stage at the moment: they're getting their heads around it, but they don't have too much time to cogitate on it. You know, some people might say, well, what sorts of biases are in AI? But like you said, by our nature humans are biased, so of course it's going to show up. But here are some examples. In healthcare, underrepresented data on women or minority groups can skew predictive AI algorithms. Then we have applicant tracking systems, so HR, which obviously is a huge part of most people's working lives: issues with natural language processing algorithms there can produce biased results. In online advertising, biases in search engine and ad algorithms can reinforce job-role and gender biases. And then there's image generation, and also predictive policing tools. All of these touch on different elements of society. So if one is wondering how, this is how, and these are the problems we need to solve. AI is a great technology and we need to use it, we need to fulfil its potential, but we need to do it with care and with a conscientiousness that is inclusive and looks at diversity from all angles. Yeah, so that's what I wanted to say about that.
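The healthcare example above can be checked mechanically. A hedged Python sketch of a representation audit, using a made-up toy dataset: compute each group's share of the training data and flag large gaps.

```python
from collections import Counter

def representation_shares(records: list[dict], attribute: str) -> dict[str, float]:
    """Share of each group in a dataset; large gaps flag underrepresentation."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Toy clinical dataset: 120 records for women, 880 for men (invented numbers).
training_set = [{"sex": "F"}] * 120 + [{"sex": "M"}] * 880
print(representation_shares(training_set, "sex"))  # {'F': 0.12, 'M': 0.88}
```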
So does AI & Partners have any events coming up? I mean, you touched upon a paper, but you do publish a number of documents and papers, and you have been. Is there anything that you would like our readers to know about, or for them to review or read?
I'd say keep a close eye on LinkedIn; as you say, the team publish quite a lot of things. If you want to know more about the EU AI Act, we do have a training course to get you up to speed, and it helps with CPD requirements. And if you're a fan of the white papers the team have done, there's one coming out in just over a week's time examining third-country preparedness, so how prepared third countries and the organizations there are for the EU AI Act. Just a word of warning: it is quite hefty. The white paper is 14,000 words, but there is also a condensed executive summary, maybe for when you're having your morning coffee.
Wonderful. Thank you so much for taking the time to come on and talk to us about this. I think it's a very important subject, and I don't think there's enough awareness or enough marketing going on for companies and small businesses to really understand and appreciate this new EU AI Act. So thank you for coming on so that we can get that information out there and get them protected. Thank you for having us.