Congress hasn’t determined the artificial intelligence behavior it wants to regulate

Congress’ attempt at AI legislation

The 116th Congress has three bills or resolutions of note sitting in committee that attempt to address the use of artificial intelligence in American society.  H.Res. 153 is intended to support guidelines for the ethical development of artificial intelligence.

Meanwhile, H.R. 2202 seeks to establish a coordinated federal initiative to accelerate artificial intelligence research and development for the economic and national security of the United States, while H.R. 827 is concerned with training and retraining workers facing employment disruption by artificial intelligence.

The bills and resolution are more exploratory than regulatory in nature.  They do not expressly target any behavior that needs to be regulated.  The nascent state of artificial intelligence development, along with government’s inability to keep regulation at the same pace as technology development, may partly explain the “let’s see what we have here” stance of early regulation.  In other words, Congress may just be getting a feel for what AI is while taking care not to interfere with the innovation needed to further develop the technology.

On the other hand, these bills could help Congress get ahead of the issue of labor disruption. Workers read about AI’s capacity to replace the more pedestrian or mundane jobs, jobs currently occupied by lower-income workers.  Predictions about labor disruption won’t leave certain higher-waged jobs untouched either, as AI platforms, data mining, and machine learning threaten professions such as accounting and law.  By determining the data needed to thoroughly analyze the impact and growth of artificial intelligence; identifying the industries that will benefit most from, or be harmed most by, AI; and comparing today’s existing job skills with the job skills that will be needed to work alongside AI, Congress could contribute to alleviating labor force disruption.

The Human-Machine Relationship

Contrary to a number of dire predictions of the emergence of “SkyNet” and Terminator-like machines subjecting humans to slavery or worse, most analysts and commentators see AI as a tool that augments human capabilities, making humans better or more productive.  The emphasis will be on collaborative intelligence, with human and machine working together.  And how well that relationship works depends on how well humans program the machine.

Another consideration is artificial intelligence’s ability to self-improve.  One goal of AI development is to build an artificially intelligent agent that can build another AI agent, preferably with even greater capabilities.  The vision is to move from AI’s narrow, single-task capabilities to a more general AI, a concept in which AI exhibits greater abilities across multiple tasks, much like humans.

Possible targets of legislation or regulation

If legislation or regulation is to target the machine-human relationship, elected officials and regulatory heads will have to consider their policy initiatives’ impact on:

  • the ability of humans to train machines;
  • the ability of humans to explain AI outcomes;
  • the ability of machines to amplify human cognitive skills;
  • the ability of machines to embody the human skills necessary to extend human physical capabilities;
  • the ability of artificial intelligence to self-improve;
  • the difference between “safe” AI (the ability to maintain consistency in AI outcomes) and “ethical” AI (ensuring that AI platforms are not a threat to human life).

Conclusion

Just like the application of artificial intelligence, Congress’ foray into regulation of AI is nascent.  This is the time for AI’s stakeholders to either begin or maintain efforts to influence all levels of government on AI’s use in the commercial sector.

 


As machines become self-aware, will they need privacy law?

As machines become self-aware, will they need legal protection? Maybe the question is a bit too far ahead, but discussions regarding artificial intelligence and machine learning had me contemplating the relationship between man and machine thirty or fifty years from now. At this stage I admittedly have more questions than answers, but that is what exploration is all about. Given my interest in what I term pure digital information trade, where machines collect, analyze, and trade data and information among themselves without human intervention, and the potential for machines to become sentient, I am considering what the legal relationship between man and machine will be. Will man consider the self-aware machine an “equal” in terms of sentient rights, or will the machine continue to be what it is today: a tool and a piece of property?

What do we mean by “sentient?”

Sentient, according to Webster’s New World Dictionary, is defined as “of or capable of perception; conscious.” To be conscious is to have an awareness of oneself as a thinking being. As thinking beings we formulate ideas or thoughts and either act on them or exchange them with others through some communications medium. Can machines do this? No.

Machines are not aware of themselves. They can only take commands, follow their programming, and respond to patterns. Their creator and programmer, man, does not even have a full understanding of the human brain, currently the only place where consciousness resides. Without an understanding of the brain and the consciousness it generates, replicating consciousness within a machine would be impossible.

But if machines became sentient ….

Futurist and inventor Ray Kurzweil believes that by 2020 we will have computers that emulate the brain, and that by 2029 the brain will be “reverse engineered,” with all areas mapped and copied so that the “software” necessary for emulating all portions of the brain can be produced. In addition, Mr. Kurzweil believes that as information technology grows exponentially, machines will be able to improve on their own software design and that by 2045 the intelligence of machines will be so advanced that technology will have achieved a condition he calls “singularity.”

But if this singularity is achieved, this state of recursive self-improvement in which a machine is able to examine itself, recognize ways to improve its design, and improve upon itself, then how should humans treat the machine at that point?

Since my interest is in pure digital data trade markets, I would like to know how humans will treat machines capable of interconnecting with each other over the internet and exchanging machine-generated information and value without human intervention. Will they receive the same level of privacy humans today seek for their personal data? Given the exponential growth Mr. Kurzweil references, will privacy law even matter to a sentient machine capable, probably, of outperforming the technology of the State? What type of regulatory scheme might government create in order to mitigate this scenario?

The year 2045 is only around the corner….

The State’s role in integrating artificial intelligence into America’s economy

Artificial intelligence has the capability of creating another resource that can be optimized or consumed by a nation-state.  Increases in computing power and better-designed algorithms, along with access to increasing amounts of data, translate into an increased amount of information that can be extracted via machine learning.

Venture capitalist Nick Hanauer postulates that a nation’s prosperity is a function of the rate at which we solve problems.  If he is correct, then problem solving requires that we maximize the amount of available information to find the best answer.

If information is the jet fuel for a Fourth Industrial Revolution economy, data is the oil that has to be extracted and refined. Companies such as Amazon, Facebook, and Google are using machine learning to provide better customer and subscriber experiences with their products.  They are among the largest of the data miners.  Their efforts, along with those of other technology companies, are expected to contribute to economic growth beyond a baseline (no artificial intelligence) scenario.

For example, Accenture reports that labor will see a 35% increase in productivity by the year 2035 due to the application of artificial intelligence.  Annual growth rates in value added to gross domestic product are approximated at 4.6% by 2035. With capital and labor reaching their limits as contributors to increased economic growth (the latter due to a cap on the capacity of cognitive ability), artificial intelligence, taking its place alongside capital, labor, and entrepreneurship as a factor of production, is expected to help the economy exceed its current limits in three ways:

  1. Automating physical tasks as a result of artificial intelligence’s ability to self-learn;
  2. Augmenting labor by giving labor the opportunity to focus on creativity, imagination, and innovation; and
  3. Diffusing innovation through the economy.
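The Accenture figures above can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below, in Python, assumes a 2018 baseline and smooth compounding through 2035; those assumptions are mine, not the report’s.

```python
# Back-of-the-envelope check on the Accenture projections, assuming a
# 2018 baseline and smooth compound growth over the 17 years to 2035.

def implied_annual_rate(total_growth: float, years: int) -> float:
    """Convert a cumulative growth fraction into a compound annual rate."""
    return (1 + total_growth) ** (1 / years) - 1

years = 2035 - 2018  # 17 years

# A 35% cumulative productivity gain by 2035 implies roughly 1.8% per year.
productivity_rate = implied_annual_rate(0.35, years)
print(f"implied annual productivity growth: {productivity_rate:.2%}")

# Meanwhile, 4.6% annual growth in value added compounds to more than
# double the starting level by 2035.
cumulative_value_added = 1.046 ** years
print(f"cumulative value-added factor by 2035: {cumulative_value_added:.2f}")
```

The gap between the two figures is the point: a modest-sounding annual rate, sustained over seventeen years, more than doubles value added, while the headline 35% productivity gain works out to under 2% per year.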

With these promises of growth comes the fear on the part of labor that artificial intelligence will eliminate the need for a substantial portion of current jobs.  Even as experts and academics tout artificial intelligence as a complement to labor and an augmenter of labor’s cognitive skills, there is still the fear that this emerging technology will create a valueless human workforce.  This perception creates a dilemma for a government that sees democracy under attack globally.  Is artificial intelligence going to exclude millions in the name of efficiency? If so, what use is participating equally in an electoral process if the economy leaves you out?

Government will have to prepare a messaging campaign if it is to maintain its legitimacy as a distributor of economic equity in the face of an increasingly digitized economy and society. The potential destructive nature of artificial intelligence is scarier than what has been presented in movies like “2001: A Space Odyssey” or “Terminator.” The immediate benefits of artificial intelligence may flow first to those who already have high-tech skills or who hold or have access to great amounts of capital. In other words, AI is the ultimate nail in the coffin of the capital gap. Those with access to or control of capital will only see their control over the data and information that feed it grow larger. If you can’t process data or package useful information, you are nonexistent, just useless furniture. It won’t be some AI robot that kills you off. It will be a human with money and enhanced cognitive skills who decides you are valueless.

As Erik Brynjolfsson, Xiang Hui, and Meng Liu pointed out last month in an article for The Washington Post, “No economic law guarantees that productivity growth benefits everyone equally.  Unless we thoughtfully manage the transition, some people, even a majority, are vulnerable to being left behind even as others reap billions.”

As Professor Yuval Noah Harari notes, however, technology is not deterministic.  It is people who make decisions as to how their political economy will shift and change.  Brynjolfsson, Hui, and Liu note that voters need to urge policymakers to “invest in research that will design approaches to human learning for an era of machine learning.”

The evidence does not show that policymakers are being prodded to move on the issue of artificial intelligence. That is not surprising, since voters are not knowledgeable about the issue either; artificial intelligence does not top any poll of voter concerns.  As for Congress, the only major action has been companion bills S.2217 and H.R. 4625, which would have the Secretary of Commerce establish a federal advisory committee on the development and implementation of artificial intelligence.  While the bills provide good working definitions of artificial intelligence and machine learning and count economic productivity, job growth, and labor displacement among their concerns, allowing a bill to sit in committee for ten months is not the kind of speedy intelligence that artificial intelligence needs as a complement.