Congress hasn’t determined the artificial intelligence behavior it wants to regulate

Congress’ attempt at AI legislation

The 116th Congress has two bills and a resolution of note sitting in a number of committees that attempt to address the use of artificial intelligence in American society. H.Res. 153 is intended to support the development of guidelines for the ethical development of artificial intelligence.

Meanwhile, H.R. 2202 seeks to establish a coordinated Federal initiative to accelerate artificial intelligence research and development for the economic and national security of the United States, while H.R. 827 is concerned with training and retraining workers facing employment disruption by artificial intelligence.

The bills and resolution are more exploratory than regulatory in nature. They don’t expressly identify any behavior that needs to be regulated. The nascent state of artificial intelligence development, along with government’s inability to keep regulation at the same pace as technology development, may in part explain the “let’s see what we have here” stance of early regulation. In other words, Congress may just be getting a feel for what AI is while taking care not to interfere with the innovation needed to further develop the technology.

On the other hand, these bills could help Congress get ahead of the issue of labor disruption. Workers read about AI’s capacity to replace jobs that are more pedestrian or mundane, jobs currently occupied by lower-income workers. The predicted labor disruption won’t leave certain higher-waged jobs untouched either, as AI platforms, data mining, and machine learning threaten professions such as accounting and law. By determining the data needed to thoroughly analyze the impact and growth of artificial intelligence; identifying the industries that will benefit most from, or be harmed most by, AI; and comparing today’s existing job skills with the job skills that will be needed to work alongside AI, Congress could contribute to alleviating labor force disruption.

The Human-Machine Relationship

Contrary to dire predictions of the emergence of “SkyNet” and Terminator-like machines subjecting humans to slavery or worse, most analysts and commentators see AI as a tool that augments human capabilities, making humans better or more productive. The emphasis will be on collaborative intelligence, with human and machine working together. And how well that relationship works depends on how well humans program the machine.

Another consideration is artificial intelligence’s ability to self-improve. The goal of AI development is to build an artificially intelligent agent that can build another AI agent, preferably one with even greater capabilities. The vision is to move from AI’s narrow, single-task capabilities to a more general AI, a concept that sees AI exhibiting greater abilities across multiple tasks, much like humans.

Possible targets of legislation or regulation

If legislation or regulation is to target the machine-human relationship, elected officials and regulatory heads will have to consider their policy initiatives’ impact on:

  • the ability of humans to train machines;
  • the ability of humans to explain AI outcomes;
  • the ability of machines to amplify human cognitive skills;
  • the ability of machines to embody the human skills necessary to extend human physical capabilities;
  • the ability of artificial intelligence to self-improve;
  • the difference between “safe” AI (the ability to maintain consistency in AI outcomes) and “ethical” AI (ensuring that AI platforms are not a threat to human life).


Like the application of artificial intelligence itself, Congress’ foray into regulation of AI is nascent. This is the time for AI’s stakeholders to begin, or maintain, efforts to influence all levels of government on AI’s use in the commercial sector.



As machines become self-aware, will they need privacy law?

As machines become self-aware, will they need legal protection? Maybe the question is a bit too far ahead, but discussions regarding artificial intelligence and machine learning had me contemplating the relationship between man and machine thirty or fifty years from now. At this stage I admittedly have more questions than answers, but that is what exploration is all about. Given my interest in what I term pure digital information trade, where machines are collecting, analyzing, and trading data and information among themselves without human intervention, and given the potential for machines to become sentient, I am considering what the legal relationship will be between man and machine. Will man consider the self-aware machine an “equal” in terms of sentient rights, or will the machine continue to be what it is today: a tool and a piece of property?

What do we mean by “sentient?”

Sentient, according to Webster’s New World Dictionary, is defined as “of or capable of perception; conscious.” To be conscious is to have an awareness of oneself as a thinking being. As thinking beings, we formulate ideas or thoughts and either act on them or exchange them with others through some communications medium. Can machines do this? No.

Machines are not aware of themselves. They can only take commands, follow their programming, and respond to patterns. Their creator and programmer, man, does not even have a full understanding of the human brain, currently the only place where consciousness resides. Without an understanding of the brain and the consciousness it generates, replicating consciousness within a machine would be impossible.

But if machines became sentient ….

Futurist and inventor Ray Kurzweil believes that by 2020 we will have computers that emulate the brain, and that by 2029 the brain will be “reverse engineered,” with all areas mapped and copied so that the “software” necessary for emulating all portions of the brain can be produced. In addition, Mr. Kurzweil believes that as information technology grows exponentially, machines will be able to improve their own software design, and that by 2045 the intelligence of machines will be so advanced that technology will have achieved a condition he calls “singularity.”

But if this singularity is achieved, this state of self-recursive improvement in which a machine is able to examine itself, recognize ways to improve its design, and improve upon itself, then how should humans treat the machine at that point?

Since my interest is in pure digital data trade markets, I would like to know how humans will treat machines capable of interconnecting with each other over the internet and exchanging machine-generated information and value without human intervention. Will they receive the same level of privacy humans today seek regarding their personal data? Given the exponential growth Mr. Kurzweil references, will privacy law even matter to a sentient machine probably capable of outperforming the technology of the State? What type of regulatory scheme might government create in order to address this scenario?

The year 2045 is only around the corner….