As machines become self-aware, will they need privacy law?

As machines become self-aware, will they need legal protection? Maybe the question is a bit too far ahead, but discussions regarding artificial intelligence and machine learning had me contemplating the relationship between man and machine thirty or fifty years from now. At this stage I admittedly have more questions than answers, but that is what exploration is all about. Given my interest in what I term pure digital information trade, where machines collect, analyze, and trade data and information among themselves without human intervention, and given the potential for machines to become sentient, I am considering what the legal relationship between man and machine will be. Will man consider the self-aware machine an “equal” in terms of sentient rights, or will the machine continue to be what it is today: a tool and a piece of property?

What do we mean by “sentient?”

Sentient, according to Webster’s New World Dictionary, is defined as “of or capable of perception; conscious.” To be conscious is to have an awareness of oneself as a thinking being. As thinking beings, we formulate ideas or thoughts and either act on them or exchange them with others through some communications medium. Can machines do this? No.

Machines are not aware of themselves. They can only take commands, follow their programming, and respond to patterns. Their creator and programmer, man, does not even have a full understanding of the human brain, currently the only place where consciousness resides. Without an understanding of the brain and the consciousness it generates, replicating consciousness within a machine would be impossible.

But if machines became sentient ….

Futurist and inventor Ray Kurzweil believes that by 2020 we will have computers that emulate the brain, and that by 2029 the brain will be “reverse engineered,” with all areas mapped and copied so that the “software” necessary for emulating all portions of the brain can be produced. In addition, Mr. Kurzweil believes that as information technology grows exponentially, machines will be able to improve on their own software design, and that by 2045 machine intelligence will be so advanced that technology will have reached a condition he calls the “singularity.”

But if this singularity is achieved, if a machine reaches a state of recursive self-improvement in which it can examine its own design and recognize ways to improve upon it, how should humans treat the machine at that point?

Since my interest is in pure digital data trade markets, I would like to know how humans will treat machines capable of interconnecting with each other over the internet and exchanging machine-generated information and value without human intervention. Will they receive the same level of privacy that humans today seek for their personal data? Given the exponential growth Mr. Kurzweil references, will privacy law even matter to a sentient machine probably capable of outperforming the technology of the State? What type of regulatory scheme might government create to mitigate this scenario?

The year 2045 is only around the corner….


Author: Alton Drew

I graduated from the Florida State University with a Bachelor of Science in economics and political science (1984); a Master of Public Administration (1993); and a Juris Doctor (1999). You can follow me on Twitter @altondrew.
