Breaking News

AI Startups Finally Getting Onboard With AI Ethics And Loving It, Including Those Newbie Autonomous Self-Driving Car Tech Firms Too


Fail fast, fail often. Whatever you are thinking, think bigger. Fake it until you make it.

These are the typical startup lines that you hear or see all the time. They have become a kind of advisory lore amongst budding entrepreneurs. If you wander around Silicon Valley, you’ll probably see bumper stickers with those slogans and likely witness high-tech founders wearing hoodies emblazoned with such tropes.

AI-related startups are assuredly included in the bunch.

Perhaps we might though add an additional piece of startup success advice for nascent AI-aiming firms, namely that they should energetically embrace AI ethics. That is a bumper sticker-worthy notion and assuredly a useful piece of sage wisdom for any AI founder trying to figure out how to be a proper leader and a winning entrepreneur. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

The first impulse of many AI startups is likely the exact opposite of wanting to embrace AI ethics. Here’s why.

Often, the focus of an AI startup is primarily about getting some tangible AI system out the door as quickly as possible. There is usually tremendous pressure to produce an MVP (minimum viable product). Investors are skittish about putting money into some newfangled AI contrivance that might not be buildable, and therefore the urgency to craft an AI pilot or prototype is paramount. As they say, the proof is in the pudding. Many AI startups are all about being heads down and churning out sufficient AI code to make their AI dream appear to be plausible.

In that sense of attention, there is little interest in worrying about AI ethics. The usual assumption is that any Ethical AI considerations can be bolted on once the AI core is showcased. This mindset treats any semblance of AI ethics as merely icing on the cake and not at all integral to the making of the cake. Sure, those AI ethics are a proverbial nice-to-have, and if time permits you’ll plant some Ethical AI elements here or there, but otherwise the whole idea of dutifully incorporating AI ethics is seen as farfetched and not a real-world concern.

There is another semi-popular saying that goes along with this tendency. In a street fight, you don’t have time to be fretting about style. An AI startup is grittily fighting to get its AI into existence and struggling to get funding to make it to the next stage of development. Anything else, such as those pesky AI ethics, is presumed as extremely low on the priorities of what needs to be cared about.

I should also add that there is an abundance of AI startup founders that are totally oblivious to the rising tide of calls for Ethical AI. Those that are in their own techie fog are utterly unaware that anyone cares about AI ethics to begin with. As such, you can decidedly bet that such an AI startup is not going to give any air time to including Ethical AI precepts. This is a veritable out-of-sight, out-of-mind notion of which the AI builders in the startup are probably blissfully ignorant, and even when approached about AI ethics they are apt to dismiss it as a wholly fringe factor.

You might be wondering if perhaps those brazen AI founders are right in their way of thinking. Maybe AI ethics can be tossed aside until the time that there is sufficient time for dolling up the AI system. Or perhaps there really isn’t any value in abiding by Ethical AI, and the whole set of shenanigans about AI ethics is merely one of those momentary fads.

Well, to be blunt, that’s a sure recipe for disaster.

In brief, those that ignore or delay AI ethics are in grave peril of their AI going down in flames and their startup likewise collapsing in a colossal heap. Furthermore, the potential for legal ramifications that linger long after the startup is abandoned will serve as an agonizing reminder that they should have done the right thing at the beginning. Lawsuits targeting the startup, the founder, the investors, and other stakeholders will be a longstanding stink far beyond the closing up of the failed AI startup.

Bottom-line: AI ethics should be a cornerstone of everything an AI startup opts to do, starting on day one and forever after.

Period, full stop.

Overall, in my experience advising AI startups and also based on some recent research studies about AI startups, any founder that is not giving suitable attention to Ethical AI is on a likely sour path and will find their AI efforts floundering. There is an increasing realization by investors that without the right amount of AI ethics being imbued into a budding AI system, the envisioned AI is probably going to be a lousy investment or worse still a miserable legally draining entanglement.

The chances are that AI without an Ethical AI foundation is going to make some ugly and reputation-damaging missteps. If the AI is shown to include unsavory biases and inequities, the public outcry is going to be loud and magnified many-fold via today’s social media viral tattle-telling. Not only will the AI startup get into hot water, but the investors are also bound to be dragged into the same mud. Why did they invest in an AI startup that had its head in the sand when it came to observing Ethical AI principles? Investors do not want to be permanently tarnished by the acts of one particular AI startup that they happened to fund.

Keep in mind that some big bucks are flowing into AI startups.

We are in the midst of a euphoric rush toward AI-related anything. According to stats recently compiled on behalf of the Wall Street Journal, Venture Capital (VC) funding of AI startups last year managed to gobble up about $8 billion in funding. This is probably a low-ball estimate since there are many AI startups that at first stealthily hide their AI aspirations, wanting to stave off competition or avoid prodding others into realizing that AI can be used in the ingenious manner being cleverly devised.

An insightful research survey of how AI startups are potentially utilizing AI ethics appeared recently as a Working Paper of the Center on Regulations and Markets at Brookings and is entitled “Ethical AI Development: Evidence from AI Startups” by James Bessen, Stephen Impink, and Robert Seamans. Let’s take an instructive look at what they found.

Their study surveyed a bit over two hundred AI-related startups and asked the firms about their awareness and embrace of Ethical AI. The good news is that about half of the respondents said they do have Ethical AI precepts in-house. That actually is somewhat surprising in that just a few years ago the odds were that a much lower percentage would have been so armed.

Here’s what the study said about examining AI ethics adoption at these AI startups: “We assess these issues by collecting and analyzing novel survey data from 225 AI startups. We find that more than half of responding firms have ethical AI principles. However, many of those firms have never invoked their ethical AI principles in a costly way, such as firing an employee, dropping training data, or turning down a sale.”

As suggested by the pointed remarks, the bad news about AI ethics adoption in AI startups is that they might not be seriously embracing the Ethical AI precepts. It could be lip service.

You know how that goes.

An AI startup is perhaps shamed or coerced into believing in AI ethics, so they do the minimum that they have to do to placate others. Perhaps they adorn their offices with placards that tout Ethical AI. The founder might give lofty speeches about how important AI ethics is. Meanwhile, behind the scenes, the Ethical AI notion is given little attention and possibly even scorned or ridiculed by the high-tech heavy hitters building the AI.

Not wanting to seem overly dour, we can at least herald that somewhat more than half of the AI startups claimed to have adopted AI ethics principles. I suppose we can rejoice in that revelation. Of course, this also implies that somewhat less than half have not adopted AI ethics principles. This invokes a sad face.

We have about half that apparently got the memo (so to speak) about embracing AI ethics, though some portion of them are possibly doing so merely as a checkmark to tout that they are all-in on Ethical AI. Then we have roughly the other half that has not seemingly adopted AI ethics. I’ll be optimistic and say that we can hope that those unwashed AI startups will gradually wake up and adopt Ethical AI and that the AI startups that are loosey-goosey about their already adopted AI ethics will stridently seek to turn the appearance into a reality based on actual Ethical AI practices.

That’s the smiley face view.

Continuing our look at the survey results, here are some of the characteristics of the AI startups that were polled: “Firms in our survey are about four years old and employ, on average, 36 employees. However, almost half of firms have less than eleven employees. Even though the survey was administered worldwide, most of our responses are from more developed countries, with almost 80% of responses from the United States, Canada, and Europe.”

I would almost be willing to assert that the newest AI startups are probably more likely to be embracing AI ethics than the ones that are already a few years into their journey. The so-called “business DNA” of a startup is often decided at its initial formulation. And since the Ethical AI trumpet is now loudly sounding, those AI startups just getting underway are presumably going to be adopting AI ethics. The startups that launched a few years ago might have missed out on the existing alarm bells about Ethical AI. As such, for some of those, the foundational roots of their startup might not readily be uprooted to now incorporate AI ethics.

That last comment is worth some added attention.

You might be thinking that if there are now advantages to embracing AI ethics, the sensible thing to do would be for already existent AI startups to jump on that bandwagon. This might happen.

On the other hand, the tone and mindset of the founder are typically cast in stone, and likewise the AI startup gets somewhat laden with concrete too. It can be very hard for the founder to change their views and similarly to rejigger the startup accordingly. I’ve written extensively about the importance of startup founders being able to pivot as needed, though few know how to do so and regrettably allow their startup to fumble correspondingly (see my extensive analysis of business founders and startup pivots at the link here).

Returning to the study finding that perhaps the adoption of AI ethics by some of the AI startups was not being done in earnest, we ought to examine that matter.

The question arises as to how we can substantiate that an AI startup is in fact abiding by Ethical AI principles. This is a lot harder to establish than it might seem at first glance. One way to figure this out would be to dive into AI coding and see if we can find a programmatic embrace of AI ethics. Likewise, we could try testing the AI system to see if it violates AI ethics. All in all, this is a tech-oriented means of seeking to discern adherence to Ethical AI.

Another equally valuable approach would be to see if the business decisions of the AI startup appear to reflect a declarative belief in Ethical AI. Do the actions of the AI startup match its words as it pertains to contending that AI ethics is vital and demonstrably so to the firm?

The survey examined this approach and proffered these findings: “A set of ethical AI principles in and of itself is not important unless firms adhere to those principles. From the survey, we asked firms with AI policies to provide additional information on how adherence to these policies impacts their business outcomes to determine if ethics policies are followed rather than simply being signals to investors. More than half of the firms with AI principles experienced at least one costly business outcome because they adhered to their ethical AI principles.”

The point is that if an AI startup has opted to make sure that the AI ethics rubber meets the road, as it were, there are likely business outcomes of a somewhat costly nature that would have to be incurred. Money talks, as they say. An AI startup that perhaps fires someone for not abiding by Ethical AI precepts or that has to hire someone anew to aid in the AI ethics pursuit is putting their money where their mouth is. This is especially significant for AI startups since they are usually marginally subsisting on the least amount of money and they have to stretch every dollar they can. Any costs that therefore go toward embracing an Ethical AI tenet can be interpreted as a hefty choice and one that signals a strongly held AI ethics belief.

We need to be careful though in leaping to unfair reasoning in such matters. An AI startup might from the get-go have infused Ethical AI into its entire being. In that case, they might not, later on, have to incur any costly AI ethics-oriented choices. We would therefore be somewhat misleadingly inferring that if they don’t have those subsequent costly actions, they aren’t truly serious about Ethical AI. This is a form of what is sometimes referred to as survivor bias. We might only be looking at the firms that initially skipped the foundational laying of AI ethics and giving undue credence to them when they, later on, attempt to correct their earlier misguided ship.

I’d also like to cover another vital topic about why AI startups are nowadays more so going to be adopting AI ethics.

I had moments ago mentioned that savvy AI investors are now tending to require that startups they are funding have to showcase a strident embrace of Ethical AI. There are other ways in which this arises too. For example, larger tech firms that are crafting AI are quickly adopting AI ethics, especially since they realize that they are a big target for when their AI goes ethically awry. In turn, those larger tech firms are often urging AI startups to also adopt Ethical AI.

Here’s the basis for that thinking.

Many of the larger tech firms will at times decide to buy an AI startup, doing so to augment their own AI or to allow them to more expeditiously enter into a new market that the startup-provided AI opens up for their endeavors. These sizable tech firms have lots of attorneys on their payroll and know well the dangers of getting snagged on using unethical AI. What this all means is that if an AI startup has already done the legwork in embracing AI ethics, the large tech firm wanting to buy them has an easier time deciding to do so.

The contrasting picture is that if an AI startup has failed to embrace AI ethics, they are now a problem child when it comes to a larger tech firm wanting to buy or invest in the budding entity. Will the potentially unethical AI become the tail that wags the dog? How much money will it take to turn around the AI startup toward a more Ethical AI aspiration? Is the likely delay worth making that investment, or should they instead find some other AI startup that has already infused AI ethics from day one?

An AI founder ought to be thinking about these crucial considerations.

When an AI startup is first conceived, the best way to proceed consists of thinking about a potential growth strategy (and, similarly, a viable exit strategy). You should be harboring all along the chance that a larger fish will want to gobble you up. Some entrepreneurs like the idea, while some do not and insist on going it alone. In any case, when a larger firm comes knocking at the door, the temptations of getting a huge influx of cash that can keep your baby going are quite alluring. You should gainfully plan for that day (but with no guarantees of it), and ensure that you do not get caught at the altar such that the proposed marriage suddenly falls apart due to a discovery that Ethical AI was not a grounding of your AI startup.

I suppose that is the carrot of sorts, but we also need to consider the stick (i.e., the infamous carrot and stick way of looking at life).

New laws about AI are being rapidly passed, see my extensive coverage at the link here.

The wild things that AI startups got away with a few years ago are going to gradually be codified in law as something that is more clearly stated now as being categorically unlawful. If an AI startup is closely adhering to AI ethics, the chances are they are probably also remaining within the rubric of being lawful. This is not assuredly so, but it is certainly more likely and also could be somewhat credibly argued in a legal case as a semblance of how the AI startup earnestly attempted to create lawful AI.

Okay, we’ve then got the carrot of a larger tech firm that might want to buy up or invest in an AI startup, which would be made easier to do if the AI startup was embracing Ethical AI. We also have the stick of new legislation and regulations that are targeting AI and will be a potential legal landmine for AI startups that are not mindful of AI ethics. All told, it is a powerful one-two punch that says an aware AI entrepreneur should be mindfully seeking to get into Ethical AI.

The survey generally found this one-two punch to be evidenced via the responses they received: “From our results, it is apparent that many AI startups are aware of possible ethical issues, and more than half have taken steps by providing codified AI principles to guide their firm. However, firms with prior resources, such as data sharing relationships with larger high technology firms and prior regulatory experience with GDPR, are more likely to act on these principles in a material way” (the reference to GDPR is an indication of the European Union’s General Data Protection Regulation which is EU law covering privacy and data protection aspects).

Throughout this discussion about AI ethics, I’ve been assuming that you already have a familiarity with the overall facets of Ethical AI. I’ve also been making some assumptions about AI too. We should explore these matters before we get further into unpacking the AI startup and AI ethics conundrum.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).
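To make the “AI watching AI” idea a bit more concrete, here is a minimal sketch in Python of a separate ethics-monitor component that tallies another system’s decisions per group and flags outsized disparities in real time. The class name, the threshold, and the simulated decision stream are all illustrative assumptions on my part, not an actual product design:

```python
# Hypothetical sketch: a separate "AI ethics monitor" that watches another
# AI system's decisions as they stream in and flags group-level disparities.
# The EthicsMonitor name and the 0.2 threshold are illustrative assumptions.

class EthicsMonitor:
    def __init__(self, disparity_threshold=0.2):
        self.disparity_threshold = disparity_threshold
        self.counts = {}  # group -> (favorable decisions, total decisions)

    def record(self, group, favorable):
        """Tally one decision made by the monitored AI system."""
        fav, total = self.counts.get(group, (0, 0))
        self.counts[group] = (fav + (1 if favorable else 0), total + 1)

    def check(self):
        """Return groups whose favorable-outcome rate trails the best-treated
        group by more than the allowed threshold."""
        rates = {g: fav / total for g, (fav, total) in self.counts.items() if total}
        if not rates:
            return []
        best = max(rates.values())
        return [g for g, r in rates.items() if best - r > self.disparity_threshold]

monitor = EthicsMonitor(disparity_threshold=0.2)
# Simulate decisions flowing from some other (unnamed) AI system.
for _ in range(10):
    monitor.record("group_a", favorable=True)
for i in range(10):
    monitor.record("group_b", favorable=(i < 5))  # only half favorable

flagged = monitor.check()
print(flagged)  # → ['group_b']
```

In practice such a monitor would sit alongside the production AI, raising an alert to humans rather than printing, but even this toy version shows the core idea of a second system overseeing the first.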

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), which my coverage explores at the link here, and which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the norms of Ethical AI being established. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor does it have any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use them when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
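The assemble-data, find-patterns, apply-to-new-data loop just described can be sketched with a toy example. Below is a minimal, illustrative 1-nearest-neighbor “learner” in Python; the lending scenario, feature choices, and function names are hypothetical assumptions chosen only to show how a current decision gets rendered from historical patterns:

```python
# A minimal sketch of the pattern-matching loop described above: fit a model
# to historical decision data, then apply the learned pattern to new data.
# Uses a toy 1-nearest-neighbor rule; all data here is illustrative.

def fit(history):
    """'Training' for 1-nearest-neighbor is just memorizing the historical examples."""
    return list(history)

def predict(model, features):
    """Apply the historical pattern: copy the decision of the closest past case."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(model, key=lambda example: distance(example[0], features))
    return closest[1]

# Historical decisions: (features, decision made by humans at the time).
history = [
    ((700, 50_000), "approve"),   # (credit score, income)
    ((550, 30_000), "deny"),
    ((680, 45_000), "approve"),
]
model = fit(history)

# New applicant: the "old" patterns render the current decision.
print(predict(model, (690, 48_000)))  # → "approve" (closest past case was approved)
```

Note that the model never reasons about whether the historical decisions were wise or fair; it simply echoes whatever pattern the past data contains, which is exactly why biased history matters so much.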

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL.
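As a hedged illustration of the kind of bias testing alluded to above, here is a small Python sketch of a demographic-parity style check: compare a model’s favorable-outcome rates across groups on held-out examples. The toy model, dataset, and function names are fabricated for illustration; real bias audits are far more involved than any single metric:

```python
# A simple bias test of the kind alluded to above: compare a model's
# favorable-outcome rates across groups on held-out data (a demographic-parity
# style check). The model and data are illustrative, not from any real system.

def favorable_rate(model, examples):
    """Fraction of examples the model decides favorably."""
    outcomes = [model(features) for features, _group in examples]
    return sum(outcomes) / len(outcomes)

def parity_gap(model, examples):
    """Largest difference in favorable-outcome rate between any two groups."""
    by_group = {}
    for features, group in examples:
        by_group.setdefault(group, []).append((features, group))
    rates = [favorable_rate(model, exs) for exs in by_group.values()]
    return max(rates) - min(rates)

# A toy "model" that (perhaps inadvertently) keys off a feature correlated
# with group membership -- the kind of buried bias that testing can surface.
model = lambda features: features[0] > 5

test_set = [
    ((8, 1), "group_a"), ((9, 0), "group_a"), ((7, 1), "group_a"),
    ((3, 1), "group_b"), ((4, 0), "group_b"), ((6, 1), "group_b"),
]
gap = parity_gap(model, test_set)
print(round(gap, 2))  # → 0.67; a large gap flags a disparity worth auditing
```

Even a crude check like this can surface a disparity that eyeballing the arcane model internals would never reveal, which is why such testing belongs in the development pipeline rather than as an afterthought.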

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase the AI startup and AI ethics circumstances. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI startups and Ethical AI, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And AI Startups With Ethical AI

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can later be overtaken by developers who in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and the Ethical AI possibilities entailing AI startups.

A teensy bit of a history lesson is in order.

When the first scrum of AI self-driving car companies got underway, there was very little attention being paid to Ethical AI concerns. In those days, the focus was the usual preoccupation with tech wonderment, namely whether or not AI could be devised to drive a car at all. The thinking principally was that if any AI ethics qualms might arise, there was no particular basis for worrying about them until one could summarily “prove” that having the AI do the driving was distinctly feasible.

Being overly preoccupied with any AI ethics was akin to putting the cart before the horse, some would have suggested at the time.

Furthermore, there wasn’t quite as much awareness about how AI could be shaped such that it would indeed perform in unethical ways. The assumption generally was that an AI developer would need to go out of their way to purposefully craft AI that was not of an ethical caliber. Only evildoers who were deviously constructing AI would do that.

You might know that eventually the realm of AI Ethics inevitably caught up with those making and fielding AI-infused autonomous vehicles. Various foul acts by AI driving systems (whether poorly conceived or at times deployed half-baked) resulted in a realization that something needed to be done. I have chronicled in my long-running column the gradual Ethical AI changes that took place, including that most if not nearly all of the bona fide self-driving tech firms ended up putting in place top-level executives responsible for infusing AI ethics throughout their firms (see my coverage at the link here).

That being said, I do not want to leave the impression that the Ethical AI matter is somehow fully put to bed in today’s AI self-driving car entities. I assure you that we still have a lot of ground that needs to be covered. For example, as I have insisted numerous times, there is still to this day a lack of proper attention being given to the AI ethics issues as spurred by the famous or infamous Trolley Problem. For details about this unresolved matter, see my discussion at the link here and the link here.

One thing to also realize is that by and large most of the self-driving car entities are now no longer reasonably labeled as AI startups. They are far beyond the startup phase. Some of them grew on their own. Some of them got bought by a larger firm, such as an automaker or a delivery firm. Etc.

In that manner of consideration, you could argue that they can nowadays generally afford to embrace AI ethics. They are flush with cash and have the ability to run their firms in a less hand-to-mouth manner. They also tend to have greater exposure, such that if they aren’t abiding by Ethical AI they will get called out for it.

When you have your self-driving cars zipping around on public roadways, any AI ethical lapses are bound to get noticed and widely reported. The parent company that owns the AI self-driving car entity is going to take a severe beating in the public eye and potentially take a bad hit in the stock market. For a variety of reasons, AI ethics tends to be given relatively substantial weight at these firms, but of course, this is not necessarily always the case and you can bet that lapses are going to occur.

Does this imply then that there aren’t any AI startups in this niche and ergo no need to discuss their AI ethics adoption therein?

It might seem that way since the headlines are usually about the biggie self-driving tech firms. Unless you perchance are especially paying attention to this niche, you would think that there aren’t any new startups going into this realm. The whole shebang seems to be dominated by relatively established companies and all of the startup activity has either been previously accomplished or cannot get any traction since the market is clogged with today’s overbearing big oak trees.

You would be wrong in making such an assumption about AI startups in this arena. There is still quite a bit of action. The difference is that AI startups nowadays tend to be aimed at a piece of the pie rather than the whole pie. In days past, the AI startups typically wanted to build an AI self-driving car from the ground up, doing everything from A to Z. You would start from scratch with a blank sheet of paper and imagine what someday a self-driving car might be like. That became your marching orders.

The odds of that kind of AI startup being launched today are relatively low. Instead, the AI startups in the self-driving car niche tend to be dealing with particular components or subsets of what AI self-driving cars consist of. This happens quite a lot in the sensor suite facets of self-driving cars. You’ve got a continual emergence of AI startups that concentrate on a particular kind of sensor or a specific AI algorithm to do a better job of interpreting the data collected by a specific sensor type.

Overall, the question of getting AI startups to embrace AI ethics is still alive and kicking in the autonomous vehicles and self-driving cars space.

In advising AI startups all told, including those in the autonomous vehicle realm, I have come up with my favorite tips or recommendations on these matters.

As a taste, here are my decreed Top Ten:

1) The founder has to be or become an AI ethics proselytizer, else the rest of the AI startup won’t perceive that the topic is worthy of their scarce attention amid already overworked endeavors (it all starts at the top).

2) A founder is unlikely to fully grasp what Ethical AI entails and ought to get up-to-speed so that their comprehension of the topic matches their passion for it (know what you are talking about).

3) Hollow platitudes by a founder about AI ethics will absolutely undercut the startup teams and turn the Ethical AI banner into a worthless flag absent of any substance (do not poison the waters).

4) Make sure to connect the Ethical AI precepts with specific duties and actions involved in designing, building, testing, and fielding the AI (otherwise an oversized and unbridgeable gap will exist and your teams will not be able to connect the dots).

5) Put in place a sensible AI development methodology that either already has incorporated Ethical AI precepts or that can reasonably be retrofitted appropriately and without undue distraction (a proper methodology will guide the teams seamlessly in the path toward AI ethics fruition).

6) Provide suitable training to the teams about AI ethics and make sure that this is done in an eminently practical rubber-meets-the-road manner (lofty training will do little and likely cause a backlash at seemingly having wasted precious time).

7) Establish a relevant means of rewarding the teams for embracing Ethical AI in their daily efforts, including highly visible recognition by the founder, monetary or other bonuses, friendly scorecards and competitions, and the like (talk the walk, walk the talk).

8) Get a trusted outsider to periodically review the AI ethics adoption, providing an independent set of eyes to see how things are really coming along (get past the internal platitudes and see the world as it really is).

9) Make sure to keep your investors and other stakeholders apprised of your Ethical AI activities and adoption, since they will otherwise likely assume that you are not doing anything on it and could suddenly turn tyrannical when bumps occur in the AI endeavors (be proud and be vocal about your AI ethics pursuits).

10) Keep on top of the Ethical AI embracement since it is never-ending and will need to have continual upkeep and boosting (you snooze, you lose).

I have many more of these AI Ethics organizational adoption recommendations and will gradually be covering them in later postings. Stay tuned.

Conclusion

The glass is half-full and half-empty.

According to the earlier cited survey, approximately half of the AI startup firms polled were adopting AI ethics, and thus we can assume that about half were not. We also know, though, that even if an AI startup said it was embracing Ethical AI, it might not have done so in any substantive manner. It is abundantly easy and convenient to simply pay lip service to the cause.

The pressures arising from demands by savvy investors will undoubtedly boost the pace and proportion of AI startups that, willingly or not, will opt to dive headfirst into the AI ethics realm. Likewise, the specter of lawsuits and prosecution for criminal acts as a result of violating newly enacted AI-related laws will certainly inspire AI startups to get their act together on Ethical AI matters.

Is ignoring or skirting AI ethics really worth losing your shirt or ending up in a jail cell? I certainly hope not.

The harsh pain of the stick is of course not the only basis for aiming to embrace AI ethics. As mentioned, if you want to grow or possibly eventually exit from your startup, larger buying firms are going to want to see that you’ve suitably got Ethical AI and truly infused it into your AI startup. The payoff of abiding by AI ethics is sensible and enormous. The carrot is good.

As a final comment, for now, we all know that those in the tech field love to tout copiously catchy slogans. Build the future. Push the limits. Be data-driven. Go big or go home.

And just one more needs to be added: Energetically embrace AI ethics.

Live it, make it so, believe in it.
