Congress hasn’t determined the artificial intelligence behavior it wants to regulate

Congress’ attempt at AI legislation

The 116th Congress has three bills or resolutions of note sitting in a number of committees that attempt to address the use of artificial intelligence in American society.  H.Res. 153 is intended to support the development of guidelines for the ethical development of artificial intelligence.

Meanwhile, H.R. 2202 seeks to establish a coordinated Federal initiative to accelerate artificial intelligence research and development for the economic and national security of the United States, while H.R. 827 addresses the training and retraining of workers facing employment disruption by artificial intelligence.

The bills and resolution are more exploratory than regulatory in nature.  They don’t expressly identify any behavior that needs to be regulated.  The nascent state of artificial intelligence development, along with government’s inability to keep regulation apace with technological change, may partly explain the “let’s see what we have here” stance of this early legislation.  In other words, Congress may just be getting a feel for what AI is while taking care not to interfere with the innovation needed to further develop the technology.

On the other hand, these bills could help Congress get ahead of the issue of labor disruption. Workers read about AI’s capacity to replace jobs that are more pedestrian or mundane, jobs currently held by lower-income workers.  Predictions of labor disruption won’t leave higher-wage jobs untouched either, as AI platforms, data mining, and machine learning threaten professions such as accounting and law.  By determining the data needed to thoroughly analyze the impact and growth of artificial intelligence; identifying the industries that will benefit most from, or be harmed most by, AI; and comparing today’s job skills with the skills that will be needed to work alongside AI, Congress can contribute to alleviating labor force disruption.

The Human-Machine Relationship

Contrary to dire predictions of the emergence of “SkyNet” and Terminator-like machines subjecting humans to slavery or worse, most analysts and commentators see AI as a tool that augments human capabilities, making humans better or more productive.  The emphasis will be on collaborative intelligence, with human and machine working together.  And how well that relationship works depends on how well humans program the machine.

Another consideration is artificial intelligence’s ability to self-improve.  The goal of AI development is to build an artificially intelligent agent that can build another AI agent, preferably one with even greater capabilities.  The vision is to move from AI’s narrow, single-task capabilities to a more general AI, a concept in which AI exhibits greater abilities across multiple tasks, much like humans.

Possible targets of legislation or regulation

If legislation or regulation is to target the machine-human relationship, elected officials and regulatory heads will have to consider their policy initiatives’ impact on:

  • the ability of humans to train machines;
  • the ability of humans to explain AI outcomes;
  • the ability of machines to amplify human cognitive skills;
  • the ability of machines to embody the human skills needed to extend human physical capabilities;
  • the ability of artificial intelligence to self-improve;
  • the difference between “safe” AI (the ability to maintain consistency in AI outcomes) and “ethical” AI (ensuring that AI platforms are not a threat to human life).

Conclusion

Just like the application of artificial intelligence, Congress’ foray into regulation of AI is nascent.  This is the time for AI’s stakeholders to either begin or maintain efforts to influence all levels of government on AI’s use in the commercial sector.

 


As machines become self-aware, will they need privacy law?

As machines become self-aware, will they need legal protection? Maybe the question is a bit too far ahead, but discussions regarding artificial intelligence and machine learning had me contemplating the relationship between man and machine thirty or fifty years from now. At this stage I admittedly have more questions than answers, but that is what exploration is all about. Given my interest in what I term pure digital information trade, where machines collect, analyze, and trade data and information among themselves without human intervention, and given the potential for machines to become sentient, I am considering what the legal relationship between man and machine will be. Will man consider the self-aware machine an “equal” in terms of sentient rights, or will the machine continue to be what it is today: a tool and a piece of property?

What do we mean by “sentient?”

Sentient, according to Webster’s New World Dictionary, is defined as “of or capable of perception; conscious.” To be conscious is to have an awareness of oneself as a thinking being. As thinking beings we formulate ideas or thoughts and either act on them or exchange them with others through some communications medium. Can machines do this? No.

Machines are not aware of themselves. They can only take commands, follow their programming, and respond to patterns. Their creator and programmer, man, does not even have a full understanding of the human brain, currently the only place where consciousness resides. Without an understanding of the brain and the consciousness it generates, replicating consciousness within a machine would be impossible.

But if machines became sentient ….

Futurist and inventor Ray Kurzweil believes that by 2020 we will have computers that emulate the brain, and that by 2029 the brain will be “reverse engineered,” with all areas mapped and copied so that the “software” necessary for emulating all portions of the brain can be produced. In addition, Mr. Kurzweil believes that as information technology grows exponentially, machines will be able to improve on their own software design, and that by 2045 the intelligence of machines will be so advanced that technology will have achieved a condition he calls the “singularity.”

But if this singularity is achieved, if machines reach this state of recursive self-improvement in which a machine can examine itself, recognize ways to improve its design, and improve upon itself, then how should humans treat the machine at that point?

Since my interest is in pure digital data trade markets, I would like to know how humans will treat machines capable of interconnecting with each other over the internet and exchanging machine-generated information and value without human intervention. Will they receive the same level of privacy humans today seek regarding their personal data? Given the exponential growth Mr. Kurzweil references, will privacy law even matter to a sentient machine likely capable of outperforming the technology of the State? What type of regulatory scheme might government create in order to mitigate this scenario?

The year 2045 is only around the corner….

Does Facebook’s business model disrupt the political information markets?

Facebook is engaging in a war against misinformation and divisiveness perpetrated via social media in the United States, according to published reports by Bloomberg and The Atlanta Journal-Constitution. Having done a 180-degree turn from its position last year that its platform was not used to disrupt public opinion leading up to the 2016 presidential election, Facebook is using artificial intelligence tools to identify inauthentic posts and user behavior.  With teams of data scientists, policy experts, and engineers, Facebook is blocking fake accounts and vetting news stories posted on its site.

Critics doubt that Facebook’s attempts to thwart future social media influence campaigns will outweigh its incentive to distribute fictional political stories that keep people glued to Facebook while providing advertisers with millions of pairs of eyeballs.  Facebook, according to its 10-K annual report, garners almost all of its revenues from advertising.  In 2017, advertising made up 98% of Facebook’s revenues.  According to the 10-K, at the top of the list of factors that could adversely impact advertising revenues is a decrease in user engagement, including a decline in the time spent using the company’s products.

Having used Facebook for eleven years, I have witnessed the platform’s growing use as a tool for political engagement.  Facebook has expanded opportunities for voters to vet politicians and their policies.  I have also seen a significant number of posts, including memes and videos, that got the facts wrong and showed no knowledge of process, politics, or economics.  Cynicism, fear, passion, inaccuracies, sincerity, patriotism, anarchy, and indifference all run rampant on Facebook.  But do I buy the argument that messages placed on Facebook by Russian agents spread so much misinformation that America became divided overnight? That “Russian interference led to a Trump victory”?

No.  The divisiveness was already there.  Giving a couple hundred million Americans the ability to quickly share their thoughts, accurate or not, on the political news of the day simply tore away the scab.

Further evidence of divisiveness in American politics: print, broadcast, and cable media.  American media is meeting the demand of a divided public, with Fox News occupying the Right and MSNBC and CNN serving the frenzied Left.

What Washington may truly be afraid of is that politicians have less control over the channels through which they are vetted.  On the one hand, Jeffrey Rosen, president of the National Constitution Center, shared the following with The Atlantic’s Jeffrey Goldberg:

“Twitter, Facebook, and other platforms have accelerated public discourse to warp speed, creating virtual versions of the mob.  Inflammatory posts based on passion travel farther and faster than arguments based on reason.  We are living, in short, in a Madisonian nightmare.”

On the other hand, Americans may be taking to Facebook, YouTube, and Twitter in search of alternative opportunities to criticize the political packages and action plans that politicians offer in exchange for votes and increases in taxes.  The divisiveness may stem from a growing disenchantment with democracy itself.  After all, according to Professor Yuval Noah Harari, democracies are “blips in history” that depend on “unique technological conditions” and are losing credibility as democracy faces more questions about its inability to provide for and maintain a middle class.

Democracy is hard-pressed to explain why almost all of the nine million jobs created during the recovery from the 2007-2009 recession have been “gig work” paying little to no benefits.  Democracy has yet to come up with a solution to a wealth gap that the Left invests time in describing, laying blame at the feet of the rich yet offering no remedy for a society that prides itself on equal access to the ballot but still comes up short on adequate access to capital.

To the question of whether Facebook’s business model has disrupted the political information markets, I would, for now, answer yes.  Facebook has contributed to bringing unreasonable, uninformed voices into the arena. I for one do not want to be led, or have policy fed, by impassioned, unreasonable voices, no matter what part of the spectrum they fall on.  What the political class may have to watch out for in the near term is that democracy may become less of a facilitator of a peaceful transfer of power between its factions as the mob continues to peel away the scab.