The John J. Reilly Center for Science, Technology, and Values at the University of Notre Dame has released its annual list of emerging ethical dilemmas and policy issues in science and technology for 2015. This marks the third year the Center has released a list.
The Reilly Center explores conceptual, ethical, and policy issues where science and technology intersect with society from different disciplinary perspectives. Its goal is to promote the advancement of science and technology for the common good.
The Center generates its annual list of emerging ethical dilemmas and policy issues in science and technology with the help of Reilly fellows, other Notre Dame experts, and friends of the center.
The Center aims to present a list of items for scientists and laypeople alike to consider in the coming months and years as new technologies develop. Readers can vote for the issues they find most pressing, and each month in 2015 the Reilly Center will present an expanded set of resources for the issue with the most votes, giving readers more information, questions to ask, and references to consult.
The ethical dilemmas and policy issues for 2015 (presented in no particular order) are:
Real-time satellite surveillance video
What if Google Earth gave you real-time images instead of a snapshot 1-3 years old?
Companies such as Planet Labs, Skybox Imaging (recently purchased by Google), and DigitalGlobe have launched dozens of satellites in the last year with the goal of recording the status of the entire Earth in real time (or near real time). The satellites themselves are getting cheaper, smaller, and more sophisticated, with image resolution as fine as one foot. Commercial satellite companies make this data available to corporations (or, potentially, to private citizens with enough cash), allowing clients to see useful images of areas coping with natural disasters and humanitarian crises, but also to track the comings and goings of private citizens. How do we decide what should be monitored and how often? Should we use this data to solve crimes? What is the potential for abuse by corporations, governments, police departments, private citizens, or terrorists and other “bad actors”?
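To make the monitoring question concrete, here is a toy sketch of the kind of change detection near-real-time imagery enables: compare two passes over the same spot and flag pixels that differ beyond a threshold. The arrays and threshold below are made-up stand-ins; real commercial pipelines work on calibrated multispectral imagery at vastly larger scale.

```python
# Toy change detection between two satellite passes: flag pixels whose
# brightness shifts by more than a threshold. All values are invented
# stand-ins, not real imagery.

def changed_pixels(before, after, threshold=30):
    """Return (row, col) coordinates where two grayscale frames differ
    enough to suggest something on the ground changed between passes."""
    return [(r, c)
            for r, row in enumerate(before)
            for c, value in enumerate(row)
            if abs(after[r][c] - value) > threshold]

before = [[100, 102,  99],
          [101, 100, 100],
          [ 98, 100, 101]]
after  = [[100, 180,  99],   # a bright new object appears at (0, 1)
          [101, 100, 100],
          [ 98,  40, 101]]   # something dark appears at (2, 1)

print(changed_pixels(before, after))  # [(0, 1), (2, 1)]
```

Run at the scale of whole cities and refreshed daily or hourly, even this crude differencing would reveal construction, traffic, and individual routines, which is exactly what makes the policy questions above urgent.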
Resources:
Satellite Imaging Startups Skybox, Planet Labs Race to Cover Earth
How Swarms of Tiny Satellites are Creating a Real-Time Google Earth
SkyBox Gives Non Profits Access to Real-Time Satellite Imagery
Big Video Data Could Change Everything from Catching Bad Guys to Tracking Shoppers
Spy Satellites Fighting Crime from Space
Astronaut bioethics (of colonizing Mars)
Plans for long-term space missions to and the colonization of Mars are already underway. On December 5, NASA launched the Orion spacecraft and NASA Administrator Charles Bolden declared it "Day One of the Mars era." The company Mars One (along with Lockheed Martin and Surrey Satellite Technology) is planning to launch a robotic mission to Mars in 2018, with humans following in 2025. 418 men and 287 women from around the world are currently vying for four spots on the first one-way human settlement mission. But as we watch this unfold with interest, we might ask ourselves the following: Is it ethical to expose people to unknown levels of isolation and physical danger (including exposure to radiation) for such a purpose? Will these pioneers lack privacy for the rest of their lives so that we might watch what happens? Is it ethical to conceive or give birth to a child in space or on Mars? And, if so, who protects the rights of a child who was not born on Earth and did not consent to the risks? If we say no to children in space, does that mean we sterilize all astronauts who volunteer for the mission? Given the potential dangers of setting up a new colony severely lacking in resources, how would sick colonists be cared for? And beyond bioethics, we might ask how an off-Earth colony would be governed.
Resources:
Introduction to Astronaut Bioethics
NASA Mulls Ethics of Sending Astronauts on Long Space Voyages
Mars One - Is This Ethical?
Wearable technology
We are currently attached (literally and figuratively) to multiple technologies that monitor our behaviors. The fitness-tracking craze has led to the development of dozens of bracelets and clip-on devices that monitor steps taken, activity levels, heart rate, and more, not to mention the advent of organic electronics that can be layered, printed, painted, or grown on human skin. Google is teaming up with Novartis to create a contact lens that monitors blood sugar levels in diabetics and sends the information to healthcare providers. Combine that with Google Glass and the ability to search the Internet for people while you look straight at them, and you see that we’re already encountering social issues that need to be addressed. The new wave of wearable technology will allow users to photograph or record everything they see. It could even allow parents to view what their children are seeing in real time. Employers are experimenting with devices that track volunteering employees' movements, tone of voice, and even posture. For now, only aggregate data is being collected and analyzed to help employers understand the average workday and how employees relate to each other. But could an employer require workers to wear devices that monitor how they speak, what they eat, when they take a break, and how stressed they get during a task, and then punish or reward them for good or bad data? Wearables have the potential to educate us and protect our health, but also to violate our privacy in any number of ways.
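To make the aggregate-data distinction concrete, here is a minimal sketch, with invented names and numbers, of reporting only group-level statistics from wearable readings and refusing to report anything when the group is too small to hide an individual:

```python
from statistics import mean

# Hypothetical per-employee step counts from wearable trackers;
# the names and numbers are invented for illustration.
readings = {
    "employee_01": 7412,
    "employee_02": 10238,
    "employee_03": 4190,
}

def aggregate_only(readings, min_group_size=3):
    """Report group-level statistics only, and refuse to report at all
    when the group is so small an individual could be singled out."""
    if len(readings) < min_group_size:
        raise ValueError("group too small to protect individual privacy")
    values = list(readings.values())
    return {"employees": len(values), "mean_steps": mean(values)}

print(aggregate_only(readings))  # {'employees': 3, 'mean_steps': 7280}
```

The ethical line in the paragraph above is the difference between this kind of aggregation and an employer keeping the per-person dictionary itself.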
Resources:
7 Urgent Ethical Dilemmas for Wearable Tech
Wear This Device So Your Boss Knows You’re Losing Weight
Companies Are Putting Sensors On Employees To Track Their Every Move
Google Smart Contact Lens Focuses On Healthcare Billions
State-sponsored hacktivism and “soft war”
"Soft war" is a concept used to explain rights and duties of insurgents (and even terrorists) during armed conflict. Soft war encompasses tactics other than armed force to achieve political ends. Cyber war and hacktivism could be tools of soft war, if used in certain ways by states in inter-state conflict, as opposed to alienated individuals or groups (like "Anonymous").
We already live in a state of low-intensity cyber conflict. But as these actions become more aggressive and begin damaging infrastructure, how do we fight back? Does a nation have a right to defend itself against, or retaliate for, a cyber attack, and if so, under what circumstances? What if the aggressors are non-state actors? If a group of Chinese hackers launches an attack on the US, does that give the US government the right to retaliate against the Chinese government? In a soft war, what are the conditions of self-defense? May that self-defense be preemptive? Who can be attacked in a cyber war? We’ve already seen operations that hack into corporations and steal private citizens’ data. What's to stop attackers from hacking into our personal wearable devices? Are private citizens attacked by cyberwarriors just another form of collateral damage?
Resources:
Iran’s Mehmanparast Says ‘Soft War’ Cyber Attacks to Fail
Ethics 2014 | George Lucas: Cyber Hacktivism and 'Soft' War (Video)
Killer Apps or Apps that Kill? How the “Soft War” is replacing the Cold War in 2013
Enhanced pathogens
On October 17, 2014, the White House suspended funding for research that would enhance the pathogenicity of viruses such as influenza, SARS, and MERS (often referred to as gain-of-function, or GOF, research). Gain-of-function research, in itself, is not harmful; in fact, it is used to provide vital insights into viruses and how to treat them. But when it is used to increase mammalian transmissibility and virulence, the altered viruses pose serious security and biosafety risks. Those fighting to resume the research claim that GOF studies on viruses are both safe and important to science, insisting that no other form of research would be as productive. Those who argue against this type of research maintain that the biosafety risks far outweigh the benefits. They point to hard evidence of human fallibility and the history of laboratory accidents, and warn that the release of such a virus into the general population would have devastating effects.
Resources:
White House Suspends Enhanced Pathogen Research
Experts Call For Alternatives to 'Gain-of-Function' Flu Studies
Risks and Benefits of Gain-of-Function Experiments with Pathogens of Pandemic Potential
An Epistemological Perspective on the Value of Gain-of-Function Experiments Involving Pathogens with Pandemic Potential
Doing Diligence to Assess the Risks and Benefits of Life Sciences Gain-of-Function Research
Non-lethal weapons
At first it may seem absurd that weapons designed not to kill, some of which have been around since WWI, could pose an emerging ethical or policy dilemma. But consider the recent development and proliferation of non-lethal weapons such as laser missiles, blinding weapons, pain rays, sonic weapons, electric weapons, heat rays, and disabling malodorants, as well as the use of gases and sprays by both the military and domestic police forces (which are often the beneficiaries of older military equipment). These weapons may not kill (though there have been fatalities from non-lethal weapons), but they can cause serious pain, physical injuries, and long-term health consequences (the latter have not been fully investigated). We must also consider that non-lethal weapons may be used more liberally in situations that could be defused by peaceful means (since there is technically no intent to kill), used indiscriminately (without regard for collateral damage), or used as a means of torture (since the harm they cause may be undetectable after a period of time). These weapons can also be misused as a lethal force multiplier: a means of effectively incapacitating the enemy before employing lethal weapons. Non-lethal weapons are certainly preferable to lethal ones, given the choice, but should we continue to pour billions of dollars into weapons that may increase the use of violence overall?
Resources:
The Moral Dangers of Non-Lethal Weapons
Rubber Bullets in Missouri Clash Highlight Militarization of America’s Police
There’s No Such Thing as a Non-Lethal Weapon
Robot swarms
Researchers at Harvard University recently created a swarm of over 1,000 robots capable of communicating with each other to perform simple tasks such as arranging themselves into shapes and patterns. These "kilobots" require no human intervention beyond the original set of instructions and work together to complete tasks. The tiny bots are modeled on the swarm behavior of insects and could be used to perform environmental cleanups or respond to disasters where humans fear to tread. Driverless cars rely on a similar principle: the cars themselves (ideally without human intervention) would communicate with each other to obey traffic laws and deliver people safely to their destinations. But should we be worried about the ethical and policy implications of letting robots work together without humans running interference? What happens if a bot malfunctions and causes harm? Who would be blamed for such an accident? What if tiny swarms of robots could be deployed to spy or to sabotage?
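The kilobots' actual firmware is more involved, but a single swarm rule is easy to sketch. The toy simulation below, with all parameters invented for illustration, implements one cohesion rule: each robot repeatedly steps toward the average position of its nearby neighbors, and no controller intervenes after initialization.

```python
import random

# Toy swarm cohesion rule: every robot takes a small step toward the
# average position of its nearby neighbors. No central controller acts
# after initialization -- the absence of a human "running interference"
# discussed above. Parameters are invented for illustration.

NEIGHBOR_RADIUS = 3.0  # how far a robot can "see" (Manhattan distance)
STEP = 0.1             # fraction of the gap to the local center closed per tick

def neighbors(robot, swarm):
    """Other robots within communication range of this one."""
    return [other for other in swarm
            if other is not robot
            and abs(other[0] - robot[0]) + abs(other[1] - robot[1]) < NEIGHBOR_RADIUS]

def tick(swarm):
    """Advance the whole swarm by one synchronized step."""
    updated = []
    for robot in swarm:
        near = neighbors(robot, swarm)
        if not near:
            updated.append(robot)  # isolated robots stay put
            continue
        cx = sum(n[0] for n in near) / len(near)
        cy = sum(n[1] for n in near) / len(near)
        updated.append((robot[0] + STEP * (cx - robot[0]),
                        robot[1] + STEP * (cy - robot[1])))
    return updated

swarm = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]
for _ in range(200):
    swarm = tick(swarm)
```

From this one local rule, scattered robots drift into clusters without any robot knowing the global picture. That emergent, leaderless behavior is both the technology's appeal and the source of the accountability questions above.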
Resources:
The 1000-Robot Swarm
Robotic Construction Crew Needs No Foreman
Like a Swarm of Lethal Bugs
Artificial life forms
Research on artificial life forms is an area of synthetic biology focused on custom-building life forms to address specific purposes. Craig Venter and colleagues announced the first synthetic life form in 2010, created from an existing organism by introducing synthetic DNA.
Synthetic life allows scientists to study the origins of life by building it rather than breaking it down, but the technique blurs the line between life and machines, and scientists foresee the ability to program organisms. The ethical and policy issues surrounding innovations in synthetic biology renew concerns raised previously by other biological breakthroughs, including safety issues and the risks of releasing artificial life forms into the environment. Making artificial life forms has been called "playing God" because it allows individuals to create life that does not exist naturally. Gene patents have been a concern for several years now, and synthetic organisms add a new dimension to that policy issue. While customized organisms may one day cure cancer, they may also be used as biological weapons.
Resources:
Scientists Add Letters to DNA’s Alphabet, Raising Hope and Fear
Scientists Move Closer to Inventing Artificial Life
Synthetic Biology: The Next Wave of Patents on Life
Craig Venter Creates Synthetic Life Form
Resilient social-ecological systems
We need to build resilient social and ecological systems that can tolerate being pushed to an extreme while maintaining their functionality, either by returning to their previous state or by operating in a new state. Resilient systems endure external pressures such as those caused by climate change, natural disasters, and economic globalization. For example, a resilient electrical system can withstand extreme weather events or regain functionality quickly afterwards. A resilient ecosystem can maintain a complex web of life even when one or more organisms are overexploited and the system is stressed by climate change.
Who is responsible for devising and maintaining resilient systems? Both private and public companies are responsible for supporting and enhancing infrastructure that benefits the community. To what degree is it the responsibility of the federal government to ensure that civil infrastructure is resilient to environmental changes? When individuals act in their own self-interest, there is the distinct possibility that their individual actions fail to maintain the infrastructure and processes that are essential for all of society. This can lead to what Garrett Hardin in 1968 called the “tragedy of the commons,” in which many individuals making rational decisions based on their own interests undermine the collective’s long-term interests. To what extent is it the responsibility of the federal government to enact regulations that can prevent a tragedy of the commons?
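Hardin's argument is easy to make concrete. In the toy simulation below, with all numbers invented for illustration, each herder captures the full benefit of adding an animal to a shared pasture while the cost of overgrazing is split among everyone, so individually rational choices ruin the commons within a few years.

```python
# Toy model of Hardin's "tragedy of the commons." Each herder gains the
# full benefit of one more animal, while the cost of overgrazing is
# shared by all, so the individually rational move is always to add
# one more. All numbers are invented, not drawn from any real system.

CAPACITY = 100   # animals the pasture can sustain indefinitely
HERDERS = 10
YEARS = 10

pasture_health = 1.0                     # 1.0 = fully productive, 0.0 = ruined
herds = [CAPACITY // HERDERS] * HERDERS  # start exactly at capacity

for year in range(1, YEARS + 1):
    herds = [h + 1 for h in herds]       # each herder rationally adds one animal
    total = sum(herds)
    overuse = max(0, total - CAPACITY) / CAPACITY
    pasture_health = max(0.0, pasture_health - 0.5 * overuse)
    print(f"Year {year}: {total} animals, yield per animal = {pasture_health:.2f}")

# By year 6 the yield hits zero: each herder's rational choice has
# destroyed the resource every herd depends on.
```

The policy question in the paragraph above is who, if anyone, should impose the grazing limit that no individual herder has an incentive to adopt alone.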
Resources:
Vision: Resilient Ecosystems, Communities, and Economies
Tragedy of the Commons (Video)
USAID Launches New Initiative to Improve Bangladesh's Resilience to Climate Change
Torrens Resilience Institute
Brain-to-brain interfaces
It’s no Vulcan mind meld, but brain-to-brain interfaces (BBIs) have been achieved, allowing direct communication from one brain to another without speech. The interactions can be between humans or between humans and animals.
In 2014, University of Washington researchers performed a BBI experiment in which one person directed the hand movement of another person about half a mile away (communication so far has been one-way: one person sends the commands and the other receives them). The setup used an electroencephalography (EEG) machine to detect brain activity in the sender and a transcranial magnetic stimulation coil to trigger movement in the receiver. This year, scientists also transmitted words from brain to brain across 5,000 miles. In 2013, Harvard researchers led by Seung-Schik Yoo developed the first interspecies brain-to-brain interface, retrieving a signal from a human’s brain (generated by staring at a flashing light) and transmitting it into the motor cortex of a sleeping rat, causing the rodent to move its tail.
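These experiments share a one-way flow: extract a feature from the sender's brain activity, threshold it into a command, and deliver the command to the receiver's stimulator. The sketch below is only a schematic of that flow; the feature values, threshold, and function names are invented, not the Washington team's actual pipeline.

```python
# Schematic of a one-way BBI: an EEG-derived feature from the sender
# is thresholded into a binary command, which stands in for triggering
# the receiver's stimulator. All values and names here are invented
# for illustration; real systems are far more sophisticated.

MOTOR_IMAGERY_THRESHOLD = 0.7  # illustrative, not a calibrated value

def decode_sender(eeg_feature: float) -> bool:
    """Map one EEG-derived feature (imagine motor-imagery band power,
    normalized to 0..1) to a binary 'move hand' command."""
    return eeg_feature >= MOTOR_IMAGERY_THRESHOLD

def stimulate_receiver(command: bool) -> str:
    """Stand-in for firing the receiver's transcranial magnetic
    stimulation (TMS) coil over the motor cortex."""
    return "TMS pulse -> hand moves" if command else "no stimulation"

# Simulated stream of sender features: only strong motor imagery
# crosses the threshold and reaches the receiver.
for feature in [0.2, 0.5, 0.9, 0.4, 0.8]:
    print(f"{feature:.1f} -> {stimulate_receiver(decode_sender(feature))}")
```

Even this schematic makes the security worry below visible: whoever controls the decoding and delivery steps sits between two brains.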
The ethical issues are myriad. What kind of neurosecurity can we put in place to protect individuals from having information accidentally shared with or extracted from their brains (especially by hackers)? If two individuals share an idea, who is entitled to claim ownership of it? Who is responsible for actions committed by the recipient of a thought if a separate thinker is dictating those actions?
Resources:
Fusing Faculties of Mind
‘Hola’ At Your Brain: First Brain-to-Brain Interface is a Step towards Digital Telepathy
UW Study Shows Direct Brain Interface Between Humans
Neuroethics Journal Club: The Ethical Issues Behind Brain-to-Brain Interface (BTBI) Technologies