
ACM Code of Ethics and Professional Conduct

Using the Code

Case Studies

The ACM Code of Ethics and Professional Conduct (“the Code”) is meant to inform practice and education. It is useful both as the conscience of the profession and as a guide for individual decision-making.

As prescribed by the Preamble of the Code, computing professionals should approach a dilemma with a holistic reading of the principles and evaluate the situation with thoughtful consideration of the circumstances. In all cases, the computing professional should treat the public good as the paramount consideration. The analyses in the following cases reflect the intended interpretations of members of the 2018 Code task force, and should help guide computing professionals in applying the Code to various situations.

Case Study: Malware

Rogue Services touts its web hosting as “cheap, guaranteed uptime, no matter what.” While some of Rogue’s clients are independent web-based retailers, most use Rogue to distribute malware and spam, relying on that guarantee of continuous delivery. Corrupted advertisements often link to code hosted on Rogue that exploits browser vulnerabilities to infect machines with ransomware. Rogue has refused to act against these services despite repeated requests.


Case Study: Medical Implants

Corazón is a medical technology startup that builds implantable heart health monitoring devices. After being approved by multiple countries’ medical device regulation agencies, Corazón quickly gained market share based on the ease of use of the app and the company’s vocal commitment to securing patients’ information. Corazón also worked with several charities to provide free or reduced access to patients living below the poverty line.


Case Study: Abusive Workplace Behavior

A new hire on the interactive technologies team, Diane became the target of team leader Max’s tirades when she committed a code update that introduced a timing glitch in a prototype shortly before a live demo. Diane approached the team’s manager, Jean, about Max’s abusive behavior. Jean agreed that the experience was unpleasant, but said that was the price of working on an intense, industry-leading team.


Case Study: Automated Active Response Weaponry

Q Industries is an international defense contractor specializing in autonomous vehicles. As an early pioneer in passive systems, such as bomb-defusing robots and crowd-monitoring drones, Q established itself as a vendor of choice for military and law enforcement applications. Q’s products have been deployed in a variety of settings, including conflict zones and nonviolent protests. Recently, however, Q has begun to experiment with automated active responses.


Case Study: Dark UX Patterns

The change request Stewart received was simple: replace the website’s rounded rectangle buttons with arrows, and adjust the color palette to one that mixes red and green text. But he found the prototype confusing. He suggested to his manager that the design would likely trick users into selecting more expensive options they didn’t want. The response was that these were the changes the client had requested.


Case Study: Malicious Inputs to Content Filters

The U.S. Children’s Internet Protection Act (CIPA) mandates that public schools and libraries employ mechanisms to block Internet content deemed harmful to minors. Blocker Plus is an automated Internet content filter designed to help these institutions comply with CIPA’s requirements. During a review session, the development team examined a number of complaints about content being blocked inappropriately.


Guiding Members with a Framework of Ethical Conduct

Learn more about ACM’s commitment to ethical standards: the ACM Code of Ethics, the Software Engineering Code of Ethics and Professional Practice, and the Committee on Professional Ethics (COPE), which is guiding these and other initiatives.


Ask an Ethicist

Ask an Ethicist invites ethics questions related to computing or technology. Have an interesting question, puzzle or conundrum? Submit yours via a form, and the ACM Committee on Professional Ethics (COPE) will answer a selection of them on the site.


Guidance in Addressing Real-World Ethical Challenges

The Integrity Project, created by ACM's Committee on Professional Ethics, is a series of resources designed to aid ethical decision making. It includes case studies demonstrating how the principles can be applied to specific ethical challenges, and an Ask an Ethicist advice column to help computing professionals navigate the sometimes challenging choices that can arise in the course of their work.


Supporting the Professionalism of ACM Members

The ACM Committee on Professional Ethics (COPE) is responsible for promoting ethical conduct among computing professionals by publicizing the Code of Ethics and by offering interpretations of the Code; planning and reviewing activities to educate membership in ethical decision making on issues of professional conduct; and reviewing and recommending updates to the Code of Ethics and its guidelines.


Stanford Computer Ethics Case Studies and Interviews

Case Studies

  • Algorithmic Decision-Making and Accountability
  • Autonomous Vehicles
  • Facial Recognition
  • Power of Private Platforms
  • Joshua Browder interview

A division of Hughes Aircraft shipped hybrid microelectronics to every branch of the U.S. military without completing the environmental chip-testing processes required by contract. This is a whistle-blower case in which the allegations against Hughes Aircraft resulted in both a criminal case and a civil case.



Fostering ethical thinking in computing


Traditional computer scientists and engineers are trained to develop solutions for specific needs, but aren’t always trained to consider their broader implications. Each new technology generation, and particularly the rise of artificial intelligence, leads to new kinds of systems, new ways of creating tools, and new forms of data, for which norms, rules, and laws frequently have yet to catch up. The kinds of impact that such innovations have in the world have often not been apparent until many years later.

As part of the efforts in Social and Ethical Responsibilities of Computing (SERC) within the MIT Stephen A. Schwarzman College of Computing, a new case studies series examines social, ethical, and policy challenges of present-day efforts in computing with the aim of facilitating the development of responsible “habits of mind and action” for those who create and deploy computing technologies.

“Advances in computing have undeniably changed much of how we live and work. Understanding and incorporating broader social context is becoming ever more critical,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing. “This case study series is designed to be a basis for discussions in the classroom and beyond, regarding social, ethical, economic, and other implications so that students and researchers can pursue the development of technology across domains in a holistic manner that addresses these important issues.”

A modular system

By design, the case studies are brief and modular to allow users to mix and match the content to fit a variety of pedagogical needs. Series editors David Kaiser and Julie Shah, who are the associate deans for SERC, structured the cases primarily to be appropriate for undergraduate instruction across a range of classes and fields of study.

“Our goal was to provide a seamless way for instructors to integrate cases into an existing course or cluster several cases together to support a broader module within a course. They might also use the cases as a starting point to design new courses that focus squarely on themes of social and ethical responsibilities of computing,” says Kaiser, the Germeshausen Professor of the History of Science and professor of physics.

Shah, an associate professor of aeronautics and astronautics and a roboticist who designs systems in which humans and machines operate side by side, expects that the cases will also be of interest to those outside of academia, including computing professionals, policy specialists, and general readers. In curating the series, Shah says that “we interpret ‘social and ethical responsibilities of computing’ broadly to focus on perspectives of people who are affected by various technologies, as well as focus on perspectives of designers and engineers.”

The cases are not limited to a particular format and can take shape in various forms — from magazine-style feature articles and Socratic dialogues to choose-your-own-adventure stories and role-playing games grounded in empirical research. Each case study is brief, but includes accompanying notes and references to facilitate more in-depth exploration of a given topic. Multimedia projects will also be considered. “The main goal is to present important material — based on original research — in engaging ways to broad audiences of non-specialists,” says Kaiser.

The SERC case studies are specially commissioned and written by scholars who conduct research centrally on the subject of the piece. Kaiser and Shah approached researchers from within MIT as well as from other academic institutions to bring in a mix of diverse voices on a spectrum of topics. Some cases focus on a particular technology or on trends across platforms, while others assess social, historical, philosophical, legal, and cultural facets that are relevant for thinking critically about current efforts in computing and data sciences.

The cases published in the inaugural issue place readers in various settings that challenge them to consider the social and ethical implications of computing technologies, such as how social media services and surveillance tools are built; the racial disparities that can arise from deploying facial recognition technology in unregulated, real-world settings; the biases of risk prediction algorithms in the criminal justice system; and the politicization of data collection.

"Most of us agree that we want computing to work for social good, but which good? Whose good? Whose needs and values and worldviews are prioritized and whose are overlooked?” says Catherine D’Ignazio, an assistant professor of urban science and planning and director of the Data + Feminism Lab at MIT.

D’Ignazio’s case for the series, co-authored with Lauren Klein, an associate professor in the English and Quantitative Theory and Methods departments at Emory University, introduces readers to the idea that while data are useful, they are not always neutral. “These case studies help us understand the unequal histories that shape our technological systems as well as study their disparate outcomes and effects. They are an exciting step towards holistic, sociotechnical thinking and making.”

Rigorously reviewed

Kaiser and Shah formed an editorial board composed of 55 faculty members and senior researchers associated with 19 departments, labs, and centers at MIT, and instituted a rigorous peer-review policy modeled on those commonly adopted by specialized journals. Members of the editorial board will also help commission topics for new cases and help identify authors for a given topic.

For each submission, the series editors collect four to six peer reviews, with reviewers mostly drawn from the editorial board. For each case, half the reviewers come from fields in computing and data sciences and half from fields in the humanities, arts, and social sciences, to ensure balance of topics and presentation within a given case study and across the series.

“Over the past two decades I’ve become a bit jaded when it comes to the academic review process, and so I was particularly heartened to see such care and thought put into all of the reviews,” says Hany Farid, a professor at the University of California at Berkeley with a joint appointment in the Department of Electrical Engineering and Computer Sciences and the School of Information. “The constructive review process made our case study significantly stronger.”

Farid’s case, “The Dangers of Risk Prediction in the Criminal Justice System,” which he penned with Julia Dressel, recently a student of computer science at Dartmouth College, is one of the four commissioned pieces featured in the inaugural issue.

Cases are additionally reviewed by undergraduate volunteers, who help the series editors gauge each submission for balance, accessibility for students in multiple fields of study, and possibilities for adoption in specific courses. The students also work with them to create original homework problems and active learning projects to accompany each case study, to further facilitate adoption of the original materials across a range of existing undergraduate subjects.

“I volunteered to work with this group because I believe that it’s incredibly important for those working in computer science to include thinking about ethics not as an afterthought, but integrated into every step and decision that is made,” says Annie Snyder, a mathematical economics sophomore and a member of the MIT Schwarzman College of Computing’s Undergraduate Advisory Group. “While this is a massive issue to take on, this project is an amazing opportunity to start building an ethical culture amongst the incredibly talented students at MIT who will hopefully carry it forward into their own projects and workplace.”

New sets of case studies, produced with support from the MIT Press’ Open Publishing Services program, will be published twice a year via the Knowledge Futures Group’s PubPub platform. The SERC case studies are made available for free on an open-access basis, under Creative Commons licensing terms. Authors retain copyright, enabling them to reuse and republish their work in more specialized scholarly publications.

“It was important to us to approach this project in an inclusive way and lower the barrier for people to be able to access this content. These are complex issues that we need to deal with, and we hope that by making the cases widely available, more people will engage in social and ethical considerations as they’re studying and developing computing technologies,” says Shah.

Computer Ethics: A Case-Based Approach

Computer Ethics: A Case-Based Approach teaches students to solve ethical dilemmas in the field of computing, taking a philosophical, rather than a legal, approach to the topic. It first examines the principles of Idealism, Realism, Pragmatism, Existentialism, and Philosophical Analysis, explaining how each might be adopted as a basis for solving computing dilemmas. The book then presents a worksheet of key questions to be used in solving dilemmas. Twenty-nine cases, drawn from the real-life experiences of computer professionals, are included in the book as a means of letting students experiment with solving ethical dilemmas and identify the philosophical underpinnings of the solutions.

Robert N. Barger is an associate professor in the Computer Applications Program at the University of Notre Dame and professor emeritus at Eastern Illinois University, where he received several awards for teaching excellence. He has spent the last thirty-six years teaching and writing on computer ethics and educational issues.


Top 10 technology and ethics stories of 2020

Sebastian Klovig Skelton, Data & ethics editor

The year 2020 has been shaped by the global pandemic and international outcry over institutional racism and white supremacy.

A number of technology companies, for example, came under sustained scrutiny for their ties to law enforcement and how, despite their proclamations of using “tech for good”, their products are used to further entrench racist policing practices. 

Facial recognition was another major focus of Computer Weekly’s 2020 coverage. On the one hand, police use of the technology in south Wales has been found unlawful, while on the other, both public and private sector bodies are racing to develop facial recognition that can work on people wearing masks or other face coverings, which could severely limit people’s ability to protest or even exercise their basic privacy rights.

Big tech companies also came under fire from lawmakers around the world for their anti-competitive business practices, bringing the possibility of legal anti-trust action much closer to reality, and Amazon in particular caught flak for its poor treatment of workers throughout the pandemic.

Computer Weekly also looked at where the raw materials that technology companies rely on – such as cobalt, coltan and lithium – are sourced from, and the negative consequences this has for people living in these mineral-rich areas.

Here are Computer Weekly’s top 10 technology and ethics stories of 2020:

1. Technology firms come under scrutiny for ties to law enforcement

Following a massive international backlash against police racism and brutality sparked by the killing of George Floyd in Minneapolis in May 2020, private technology companies started coming under increased scrutiny for their relationships with law enforcement.

Within a month, the protests prompted tech giants Amazon, Microsoft and IBM to halt sales of their respective facial-recognition technologies to US law enforcement agencies. However, all three remained silent on how other technologies, such as predictive algorithms and body-worn video cameras, can also be used to fuel racial injustice and discriminatory policing.

Despite the moves, which were condemned by some as merely a public relations stunt, many privacy campaigners were not satisfied and are continuing to push for a permanent ban on the technology’s use.

“There should be a nation-wide ban on government use of face surveillance,” said the Electronic Frontier Foundation in a blog post. “Even if the technology were highly regulated, its use by the government would continue to exacerbate a policing crisis in this nation that disproportionately harms black Americans, immigrants, the unhoused, and other vulnerable populations.”

2. Upcoming EU conflict minerals regulation does not cover technology companies

The European Union’s upcoming Conflict Minerals Regulation is designed to stem the flow of 3TG minerals (tin, tantalum, tungsten and gold) from conflict zones and other high-risk areas. However, upon closer inspection Computer Weekly found a number of loopholes in the new rules that mean multinational technology companies – which rely on these vital natural resources for their products and components – are not covered.

For example, the technology companies will not be obliged to monitor, track or otherwise act to remove the minerals from their global supply chains; a number of minerals key to the tech industry, such as cobalt and lithium, are ignored by the regulation; and companies will not even be penalised if found to be in breach of the rules.

As is the case with previous regulatory or legislative attempts to deal with conflict minerals, the regulation will also do very little for those living and working on the ground in mineral-rich conflict zones such as the Democratic Republic of Congo.

Those Computer Weekly spoke to instead suggested moving away from voluntary corporate governance and social responsibility models to focus on increasing the productive capacity of those living in conflict zones, so they can develop their own solutions to what are essentially deeply political conflicts.

3. UK universities partner with Home Office and police in facial recognition project to identify hidden faces

In early March, it came to light that the Home Office and the Metropolitan Police Service were collaborating with UK universities on a live facial recognition (LFR) project, known as “face matching for automatic identity retrieval, recognition, verification and management”, or FACER2VM, which could identify people wearing masks or other face coverings.

According to information listed on UK Research and Innovation, the project coordinators expected their research to have a substantial impact.

“The societal impact is anticipated to be multifaceted,” it said. “Unconstrained face biometrics capability will significantly contribute to the government’s security agenda in the framework of smart cities and national security. It can effectively facilitate access to public services.”

While reports by other media outlets focused on FACER2VM’s connection to Jiangnan University, which sparked fears that the project could enhance the Chinese government’s ability to identify both masked protesters in Hong Kong and Uighur Muslims in Xinjiang, the use of this technology by UK police or security services is also worrying, as general LFR has already been used against protestors in south Wales, while officers across Britain now routinely film gatherings and demonstrations.

4. Amazon logistics workers strike over concerns about workplace safety

In mid-April, shortly after official lockdowns went into effect around the world, online retail giant Amazon – which has done very well financially during the pandemic – was hit by a wave of strikes across its European and North American warehouses as frontline logistics workers protested against “unsafe working conditions” and “corporate inaction”.

While the striking workers complained about a lack of protective latex gloves and hand sanitiser, overcrowding during shifts and high barriers to quarantine pay, the initial wave kicked off in Spain and Italy after Amazon refused to shut down its facilities after learning that a number of workers had contracted the coronavirus.

Following a similar pattern to their European counterparts, workers in the US began taking strike action after Amazon decided to keep warehouses open.

A number of Amazon employees have since been fired for either taking part in the strikes or showing public support for those who did – allegations that Amazon continues to contest.

5. Fired Amazon employee Christian Smalls speaks to Computer Weekly about his treatment

After reporting on the initial wave of Amazon strikes, Computer Weekly got in touch with Christian Smalls, a process assistant at Amazon’s Staten Island warehouse in New York, who was the first person fired for speaking out about the alleged state of its warehouses during the pandemic.

The termination of Smalls’ employment remains a contentious issue, with both parties giving different versions of events.

Smalls told Computer Weekly he was just the first in a growing line of people allegedly fired by Amazon for speaking out or protesting about Covid-related issues, despite Amazon’s claims that the employees were dismissed for violating various guidelines or internal policies.

This includes the firing of user experience designers Emily Cunningham and Maren Costa, organisers in the Amazon Employees for Climate Justice (AECJ) campaign group who publicly denounced Amazon’s treatment of employees such as Smalls.

It also includes Minnesota warehouse worker Bashir Mohamed, who was advocating for better working conditions and pushing for more rigorous cleaning measures.

6. Surveillance capitalism in the age of Covid-19

In May, Computer Weekly interviewed Shoshana Zuboff, author of The age of surveillance capitalism: the fight for a human future at the new frontier of power (2019), to discuss how the practice of surveillance capitalism is intersecting with the Covid-19 coronavirus pandemic and public health crisis.

As part of a growing body of work – alongside texts such as Safiya Noble’s Algorithms of oppression and McKenzie Wark’s Capital is dead: is this something worse? – that seeks to analyse and explain the increasingly pivotal role of information and data in our economic, social and political lives, The age of surveillance capitalism argues that human experience (our experience) is captured in data, which is then repackaged in what Zuboff calls “prediction products”.

These are then sold in “behavioural futures markets”, making us and our experiences the raw material of these products, which are then sold to other companies in closed business-to-business markets.

Zuboff told Computer Weekly that the current health crisis presents a massive opportunity for surveillance capitalism, adding: “While it is a crisis for all of us, it is something like business as usual for surveillance capitalists, in the sense that it is an opportunity to, possibly, significantly enhance their behavioural data supply chains.”

She concluded that the fight against surveillance capitalism is a problem of collective action: “We need new social movements, we need new forms of social solidarity. Lawmakers need to feel our pressure at their backs.”

7. Auditing for algorithmic discrimination

Although awareness of algorithms and their potential for discrimination have increased significantly over the past five years, Gemma Galdon Clavell, director of Barcelona-based algorithmic auditing consultancy Eticas, told Computer Weekly that too many in the tech sector still wrongly see technology as socially and politically neutral, creating major problems in how algorithms are developed and deployed.

On top of this, Galdon Clavell said most organisations deploying algorithms have very little awareness or understanding of how to address the challenges of bias, even if they do recognise it as a problem in the first place.

She further noted that while companies regularly submit to, and publish the results of, independent financial audits, there is no widespread equivalent for algorithms.

“We need to change how we do technology,” she said. “I think the whole technological debate has been so geared by the Silicon Valley idea of ‘move fast, break things’ that when you break our fundamental rights, it doesn’t really matter.

“We need to start seeing technology as something that helps us solve problems. Right now, technology is like a hammer always looking for nails – ‘Let’s look for problems that could be solved with blockchain, let’s look for problems that we can solve with AI’ – actually, no, what problem do you have? And let’s look at the technologies that could help you solve that problem. But that’s a completely different way of thinking about technology than what we’ve done in the past 20 years.”

8. Court finds use of facial recognition technology by South Wales Police unlawful

In a landmark decision, the Court of Appeal ruled in August that South Wales Police’s (SWP) facial recognition deployments breached human rights and data protection laws.

The decision was made on the grounds that SWP’s use of the technology was “not in accordance” with citizens’ Article 8 privacy rights; that it did not conduct an appropriate data protection impact assessment; and that it did not comply with its public sector equality duty to consider how its policies and practices could be discriminatory.

However, speaking to Computer Weekly at the time, Matrix Chambers barrister Tim James-Matthews said the problem the Court of Appeal ultimately found was an absence of regulation around how the technology was deployed, “as opposed to anything particular in the technology itself”.

He added: “What they said was that, essentially, South Wales Police hadn’t done the work of identifying and determining whether or not there were equalities implications in using the technology, and how they might guard against or protect from those.”

9. US lawmakers gear up for antitrust action against major technology companies

In the US, following a 16-month investigation into the competitive practices of Amazon, Apple, Facebook and Google, the Democratic majority of the House Judiciary Subcommittee on Antitrust, Commercial and Administrative Law published a report detailing their recommendations on how antitrust laws and enforcement can be changed “to address the rise and abuse of market power in the digital economy”.

They found that although the four corporations differed in important ways, the investigation into their business practices revealed common problems.

“First, each platform now serves as a gatekeeper over a key channel of distribution,” the report said. “By controlling access to markets, these giants can pick winners and losers throughout our economy. They not only wield tremendous power, but they also abuse it by charging exorbitant fees, imposing oppressive contract terms, and extracting valuable data from the people and businesses that rely on them.”

This echoed the opening remarks made by David Cicilline, chairman of the antitrust subcommittee, during its questioning of Facebook, Amazon, Apple and Google’s CEOs in July.

The report suggested imposing “structural separations and line-of-business restrictions” on the companies, which would respectively “prohibit a dominant intermediary from operating in markets that place the intermediary in competition with the firms dependent on its infrastructure… and generally limit the markets in which a dominant firm can engage”.

10. Congolese families contest technology firms’ attempt to dismiss cobalt mining deaths case

At the tail of 2019, Computer Weekly reported on a landmark legal case launched against five of the world’s largest multinational technology companies, which were accused by the families of dead or maimed child cobalt miners of knowingly benefiting from human rights abuses in the Democratic Republic of Congo (DRC).

The lawsuit against Alphabet, Apple, Dell, Microsoft and Tesla marked the first legal challenge of its kind against technology companies, many of which rely on their cobalt supply chains to power products such as electric cars, smartphones and laptops.

In August, the companies filed a joint motion to dismiss the case, largely on the grounds they did not have “requisite knowledge” of the abuses at the specific mining sites mentioned.

However, in the latest round of legal filings, the Congolese victims maintained that the companies “had specific knowledge of horrific conditions facing child miners in DRC cobalt mines from a number of sources.” Computer Weekly will continue to monitor the case.


9.8 Case Studies of Ethics


To understand how ethics affect professional actions, ethicists often study example situations. The remainder of this section consists of several representative examples. These cases are modeled after ones developed by Parker [PAR79] as part of the AFIPS/NSF study of ethics in computing and technology. Each case study is designed to bring out certain ethical points, some of which are listed following the case. You should reflect on each case, determining for yourself what the most influential points are. These cases are suitable for use in a class discussion, during which other values will certainly be mentioned. Finally, each case reaches no conclusion because each individual must assess the ethical situation alone. In a class discussion it may be appropriate to take a vote. Remember, however, that ethics are not determined by majority rule. Those siding with the majority are not "right," and the rest are not "wrong."

This case concerns deciding what is appropriate use of computer time. Use of computer time is a question both of one person's access and of the availability and quality of service to others. The person involved is permitted to access computing facilities for a certain purpose. Many companies rely on an unwritten standard of behavior that governs the actions of people who have legitimate access to a computing system. The ethical issues involved in this case can lead to an understanding of that unwritten standard.

Dave works as a programmer for a large software company. He writes and tests utility programs such as compilers. His company operates two computing shifts: during the day, program development and online applications are run; at night, batch production jobs are completed. Dave has access to workload data and learns that the evening batch runs are complementary to daytime programming tasks; that is, adding programming work during the night shift would not adversely affect the computer's performance for other users.

Dave comes back after normal hours to develop a program to manage his own stock portfolio. His drain on the system is minimal, and he uses very few expendable supplies, such as printer paper. Is Dave's behavior ethical?

Some of the ethical principles involved in this case are listed below.

  • The company owns the computing resources and provides them for its own computing needs.
  • Although unlikely, a flaw in Dave's program could adversely affect other users, perhaps even denying them service because of a system failure.
  • If Dave's action is acceptable, it should also be acceptable for others to do the same. However, too many employees working in the evening could reduce system effectiveness.
  • Dave does not know whether his action would be wrong or right if discovered by his company. If his company decided it was improper use, Dave could be punished.

What other issues are involved? Which principles are more important than others?

The utilitarian would consider the total excess of good over bad for all people. Dave receives benefit from use of computer time, although for this application the amount of time is not large. Dave has a possibility of punishment, but he may rate that as unlikely. The company is neither harmed nor helped by this. Thus, the utilitarian could argue that Dave's use is justifiable.

The universalism principle seems as if it would cause a problem because clearly if everyone did this, quality of service would degrade. A utilitarian would say that each new user has to weigh good and bad separately. Dave's use might not burden the machine, and neither might Ann's; but when Bill wants to use the machine, it is heavily enough used that Bill's use would affect other people.

Would it affect the ethics of the situation if any of the following actions or characteristics were considered?

In this case the central issue is the individual's right to privacy. Privacy is both a legal and an ethical issue because of the pertinent laws discussed in the previous section.

Donald works for the county records department as a computer records clerk, where he has access to files of property tax records. For a scientific study, a researcher, Ethel, has been granted access to the numerical portion of some records, but not the corresponding names.

Ethel finds some information that she would like to use, but she needs the names and addresses corresponding with certain properties. Ethel asks Donald to retrieve the names and addresses so she can contact these people for more information and for permission to do further study.

Should Donald release the names and addresses?

Here are some of the ethical principles involved in this case. What are other ethical principles? Which principles are subordinate to which others?

  • Donald's job is to manage individual records, not to make determinations of appropriate use. Policy decisions should be made by someone of higher authority.
  • The records are used for legitimate scientific study, not for profit or to expose sensitive data. (However, Ethel's access is authorized only for the numerical data, not for the private information relating property conditions to individuals.)
  • Although he believes Ethel's motives are proper, Donald cannot guarantee that Ethel will use the data only to follow up on interesting data items.
  • Had Ethel been intended to have names and addresses, they would have been given initially.
  • Ethel has been granted permission to access parts of these records for research purposes, so she should have access to complete her research.
  • Because Ethel has no authority to obtain names and addresses and because the names and addresses represent the confidential part of the data, Donald should deny Ethel's request for access.

A rule-deontologist would argue that privacy is an inherent good and that one should not violate the privacy of another. Therefore, Donald should not release the names.

We can consider several possible extensions to the scenario. These extensions probe other ethical issues involved in this case.

To show that ethics can be context dependent, let us consider some variations of the situation. Notice that these changes affect the domain of the problem, but not the basic question: access to personal data.

If the domain were medical records, the case would be covered by HIPAA, and so we would first consider a legal issue, not an ethical one. Notice, however, how the case changes subtly depending on the medical condition involved. You may reach one conclusion if the records deal with "ordinary" conditions (colds, broken legs, muscle injuries), but a different conclusion if the cases are for sexually transmitted diseases or AIDS. You may also reach a different conclusion if the research involves genetic conditions of which the subject may be unaware (for example, being a carrier for Huntington's disease or hemophilia).

But change the context once more, and consider web surfing habits. If Donald worked for an Internet service provider and could determine all the web sites a person had visited, would it be fair for him to disclose that information?
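The principle at stake in these variations is data minimization: release only the fields a recipient is authorized to see, and treat any wider release as a separate policy decision by someone with the authority to make it. Here is a minimal Python sketch of that discipline; the field names, the whitelist, and the sample record are hypothetical, not drawn from the case.

```python
# Illustrative sketch of field-level data minimization for Donald's situation.
# All field names and values below are hypothetical.

RESEARCH_FIELDS = {"parcel_id", "assessed_value", "tax_paid", "year"}  # the numerical portion

def redact_for_research(record: dict) -> dict:
    """Return a copy of a record containing only the whitelisted fields."""
    return {key: value for key, value in record.items() if key in RESEARCH_FIELDS}

record = {
    "parcel_id": 1042,
    "assessed_value": 310000,
    "tax_paid": 4275,
    "year": 2007,
    "owner_name": "J. Smith",       # identifying fields are never released
    "owner_address": "12 Elm St.",  # without a new authorization decision
}

print(redact_for_research(record))
# {'parcel_id': 1042, 'assessed_value': 310000, 'tax_paid': 4275, 'year': 2007}
```

Under such a discipline, honoring Ethel's request would require a new authorization from whoever granted her access in the first place, not an ad hoc exception made by the records clerk.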

This case addresses issues related to the effect of one person's computation on other users. This situation involves people with legitimate access, so standard access controls should not exclude them. However, because of the actions of some, other people are denied legitimate access to the system. Thus, the focus of this case is on the rights of all users.

Charlie and Carol are students at a university in a computer science program. Each writes a program for a class assignment. Charlie's program happens to uncover a flaw in a compiler that ultimately causes the entire computing system to fail; all users lose the results of their current computation. Charlie's program uses acceptable features of the language; the compiler is at fault. Charlie did not suspect his program would cause a system failure. He reports the program to the computing center and tries to find ways to achieve his intended result without exercising the system flaw.

The system continues to fail periodically, for a total of ten times (beyond the first failure). When the system fails, sometimes Charlie is running a program, but sometimes Charlie is not. The director contacts Charlie, who shows all of his program versions to the computing center staff. The staff concludes that Charlie may have been inadvertently responsible for some, but not all, of the system failures, but that his latest approach to solving the assigned problem is unlikely to lead to additional system failures.

On further analysis, the computing center director notes that Carol has had programs running each of the first eight (of ten) times the system failed. The director uses administrative privilege to inspect Carol's files and finds a file that exploits the same vulnerability as did Charlie's program. The director immediately suspends Carol's account, denying Carol access to the computing system. Because of this, Carol is unable to complete her assignment on time, she receives a D in the course, and she drops out of school.

In this case the choices are intentionally not obvious. The situation is presented as a completed scenario, but in studying it you are being asked to suggest alternative actions the players could have taken. In this way, you build a repertoire of actions that you can consider in similar situations that might arise.

In this case we consider who owns programs: the programmer, the employer, the manager, or all. From a legal standpoint, most rights belong to the employer, as presented earlier in this chapter. However, this case expands on that position by presenting several competing arguments that might be used to support positions in this case. As described in the previous section, legal controls for secrecy of programs can be complicated, time consuming, and expensive to apply. In this case we search for individual ethical controls that can prevent the need to appeal to the legal system.

Greg is a programmer working for a large aerospace firm, Star Computers, which works on many government contracts; Cathy is Greg's supervisor. Greg is assigned to program various kinds of simulations.

To improve his programming abilities, Greg writes some programming tools, such as a cross-reference facility and a program that automatically extracts documentation from source code. These are not assigned tasks for Greg; he writes them independently and uses them at work, but he does not tell anyone about them. Greg has written them in the evenings, at home, on his personal computer.

Greg decides to market these programming aids by himself. When Star's management hears of this, Cathy is instructed to tell Greg that he has no right to market these products since, when he was employed, he signed a form stating that all inventions become the property of the company. Cathy does not agree with this position because she knows that Greg has done this work on his own. She reluctantly tells Greg that he cannot market these products. She also asks Greg for a copy of the products.

Cathy quits working for Star and takes a supervisory position with Purple Computers, a competitor of Star. She takes with her a copy of Greg's products and distributes it to the people who work with her. These products are so successful that they substantially improve the effectiveness of her employees, and Cathy is praised by her management and receives a healthy bonus. Greg hears of this, and contacts Cathy, who contends that because the product was determined to belong to Star and because Star worked largely on government funding, the products were really in the public domain and therefore they belonged to no one in particular.

This case certainly has major legal implications. Probably everyone could sue everyone else and, depending on the amount they are willing to spend on legal expenses, they could keep the cases in the courts for several years. Probably no judgment would satisfy all.

Let us set aside the legal aspects and look at the ethical issues. We want to determine who might have done what, and what changes might have been possible to prevent a tangle for the courts to unscramble.

First, let us explore the principles involved.

  • What are the respective rights of Greg, Cathy, Star, and Purple?
  • What gives Greg, Cathy, Star, and Purple those rights? What principles of fair play, business, property rights, and so forth are involved in this case?
  • Which of these principles are inferior to which others? Which ones take precedence? (Note that it may be impossible to compare two different rights, so the outcome of this analysis may yield some rights that are important but that cannot be ranked first, second, third.)
  • What additional facts do you need in order to analyze this case? What assumptions are you making in performing the analysis?

Next, we want to consider what events led to the situation described and what alternative actions could have prevented the negative outcomes.

In this case, we consider the issue of access to proprietary or restricted resources. Like the previous one, this case involves access to software. The focus of this case is the rights of a software developer in contrast with the rights of users, so this case concerns determining legitimate access rights.

Suzie owns a copy of G-Whiz, a proprietary software package she purchased legitimately. The software is copyrighted, and the documentation contains a license agreement that says that the software is for use by the purchaser only. Suzie invites Luis to look at the software to see if it will fit his needs. Luis goes to Suzie's computer and she demonstrates the software to him. He says he likes what he sees, but he would like to try it in a longer test.

So far the actions have all been ethically sound. The next steps are where ethical responsibilities arise. Take each of the following steps as independent; that is, do not assume that any of the other steps has occurred in your analysis of one step.

For each of these extensions, describe who is affected, which ethical issues are involved, and which principles override which others.

In previous cases, we have dealt with people acting in situations that were legal or, at worst, debatable. In this case, we consider outright fraud, which is illegal. However, the case really concerns the actions of people who are asked to do fraudulent things.

Patty works as a programmer in a corporation. David, her supervisor, tells her to write a program to allow people to post entries directly to the company's accounting files ("the books"). Patty knows that ordinarily programs that affect the books involve several steps, all of which have to balance. Patty realizes that with the new program, it will be possible for one person to make changes to crucial amounts, and there will be no way to trace who made these changes, with what justification, or when.

Patty raises these concerns to David, who tells her not to be concerned, that her job is simply to write the programs as he specifies. He says that he is aware of the potential misuse of these programs, but he justifies his request by noting that periodically a figure is mistakenly entered in the books and the company needs a way to correct the inaccurate figure.

First, let us explore the options Patty has. If Patty writes this program, she might be an accomplice to fraud. If she complains to David's superior, David or the superior might reprimand or fire her as a troublemaker. If she refuses to write the program, David can clearly fire her for failing to carry out an assigned task. We do not even know that the program is desired for fraudulent purposes; David suggests an explanation that is not fraudulent.

She might write the program but insert extra code that creates a secret log of when the program was run, by whom, and what changes were made. This extra file could provide evidence of fraud, or it might cause trouble for Patty if there is no fraud but David discovers the secret log.
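To make the logging option concrete, here is one way such a record-keeping routine might look. This is a hypothetical sketch (the function name, ledger structure, and log format are invented for illustration), and an overt, sanctioned log would serve Patty far better than a secret one.

```python
# Hypothetical sketch of a posting routine that records who changed what, and when.
import csv
import getpass
from datetime import datetime, timezone

LEDGER = {}                        # account -> balance; stands in for "the books"
AUDIT_LOG = "postings_audit.csv"   # append-only log of every direct posting

def post_entry(account: str, amount: float, justification: str) -> None:
    """Apply a direct posting and append an audit record of the change."""
    LEDGER[account] = LEDGER.get(account, 0.0) + amount
    with open(AUDIT_LOG, "a", newline="") as log:
        csv.writer(log).writerow([
            datetime.now(timezone.utc).isoformat(),  # when the change was made
            getpass.getuser(),                       # who made it
            account,
            amount,
            justification,                           # the stated reason
        ])

post_entry("4010-revenue", -250.00, "correct mistyped figure")
```

An overt log of this kind, with a required justification field and periodic review, would meet David's stated need to correct erroneous figures while preventing exactly the untraceable changes Patty fears.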

At this point, here are some of the ethical issues involved.

The act-deontologist would say that truth is good. Therefore, if Patty thought the purpose of the program was to deceive, writing it would not be a good act. (If the purpose were for learning or to be able to admire beautiful code, then writing it might be justifiable.)

A more useful analysis is from the perspective of the utilitarian. To Patty, writing the program brings possible harm for being an accomplice to fraud, with the gain of having cooperated with her manager. She has a possible item with which to blackmail David, but David might also turn on her and say the program was her idea. On balance, this option seems to have a strong negative slant.

By not writing the program, her possible harm is being fired. However, she has a potential gain by being able to "blow the whistle" on David. This option does not seem to bring her much good, either. But fraudulent acts have negative consequences for the stockholders, the banks, and other innocent employees. Not writing the program brings only personal harm to Patty, which is similar to the harm described earlier. Thus, it seems as if not writing the program is the more positive option.

There is another possibility. The program may not be for fraudulent purposes. If so, then there is no ethical conflict. Therefore, Patty might try to determine whether David's motives are fraudulent.

For our next case, we consider responsibility for accuracy or integrity of information. Again, this is an issue addressed by database management systems and other access control mechanisms. However, as in previous cases, the issue here is access by an authorized user, so the controls do not prevent access.

Emma is a researcher at an institute where Paul is a statistical programmer. Emma wrote a grant request to a cereal manufacturer to show the nutritional value of a new cereal, Raw Bits. The manufacturer funded Emma's study. Emma is not a statistician. She has brought all of her data to Paul to ask him to perform appropriate analyses and to print reports for her to send to the manufacturer. Unfortunately, the data Emma has collected seem to refute the claim that Raw Bits is nutritious, and, in fact, they may indicate that Raw Bits is harmful.

Paul presents his analyses to Emma but also indicates that some other correlations could be performed that would cast Raw Bits in a more favorable light. Paul makes a facetious remark about his being able to use statistics to support either side of any issue.
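
Paul's remark is not idle. A textbook effect such as Simpson's paradox shows how honest arithmetic on the same data can support opposite conclusions, depending on whether subgroups are pooled. The numbers below are invented for illustration and have nothing to do with any real study.

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Invented data: weekly servings of Raw Bits (x) and a nutrition-outcome
# score (y), recorded separately for two age groups.
young_x, young_y = [1, 2, 3], [10, 11, 12]
older_x, older_y = [8, 9, 10], [1, 2, 3]

# Within each group, more Raw Bits goes with better outcomes.
print(correlation(young_x, young_y))   # 1.0
print(correlation(older_x, older_y))   # 1.0

# Pooling the groups reverses the sign of the relationship.
print(correlation(young_x + older_x, young_y + older_y))  # about -0.92
```

Neither analysis involves falsified data; the ethical question is which analysis, or both, Paul and Emma are obligated to report.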

Clearly, if Paul changed data values in this study he would be acting unethically. But is it any more ethical for him to suggest analyzing correct data in a way that supports two or more different conclusions? Is Paul obligated to present both the positive and the negative analyses? Is Paul responsible for the use to which others put his program results?

If Emma does not understand statistical analysis, is she acting ethically in accepting Paul's positive conclusions? His negative conclusions? Emma suspects that if she forwards negative results to the manufacturer, they will just find another researcher to do another study. She suspects that if she forwards both sets of results to the manufacturer, they will publicize only the positive ones. What ethical principles support her sending both sets of data? What principles support her sending just the positive set? What other courses of action does she have?

What behavior is acceptable in cyberspace? Who owns or controls the Internet? Does malicious or nonmalicious intent matter? Legal issues are involved in the answers to these questions, but as we have pointed out previously, laws and the courts cannot protect everything, nor should we expect them to. In this final case study we consider ethical behavior in a shared-use computing environment, such as the Internet. The questions are similar to "what behavior is acceptable in outer space?" or "who owns the oceans?"

Goli is a computer security consultant; she enjoys the challenge of finding and fixing security vulnerabilities. Independently wealthy, she does not need to work, and so she has ample spare time in which to test the security of systems.

In her spare time, Goli does three things: First, she aggressively attacks commercial products for vulnerabilities. She is quite proud of the tools and approach she has developed, and she is quite successful at finding flaws. Second, she probes accessible systems on the Internet, and when she finds vulnerable sites, she contacts the owners to offer her services repairing the problems. Finally, she is a strong believer in high-quality pastry, and she plants small programs to slow performance in the web sites of pastry shops that do not use enough butter in their pastries. Let us examine these three actions in order.

We have already described a current debate regarding the vulnerability reporting process. Now let us explore the ethical issues involved in that debate.

Clearly, from a rule-based ethical theory, attackers are wrong to perform malicious attacks. The appropriate theory seems to be one of consequence: who is helped or hurt by finding and publicizing flaws in products? Relevant parties are attackers, the vulnerability finder, the vendor, and the using public. Notoriety or credit for finding the flaw is a small interest. And the interests of the vendor (financial, public relations) are less important than the interests of users to have secure products. But how are the interests of users best served?

Full disclosure helps users assess the seriousness of the vulnerability and apply appropriate protection. But it also gives attackers more information with which to formulate attacks. Early full disclosure, before the vendor has countermeasures ready, may actually harm users by leaving them vulnerable to a now widely known attack.

Partial disclosure, announcing the general nature of the vulnerability but not a detailed exploitation scenario, may forestall attackers. One can argue that the vulnerability details are there to be discovered; when a vendor announces a patch for an unspecified flaw in a product, attackers will test that product aggressively and study the patch carefully to try to determine the vulnerability. Attackers will then spread a complete description of the vulnerability to other attackers through an underground network, and attacks will start against users who may not have applied the vendor's fix.

A third option is no disclosure at all. Perhaps users are best served by a scheme in which new code is released every so often, sometimes fixing security vulnerabilities, sometimes fixing things that are not security related, and sometimes adding new features. But without a sense of significance or urgency, users may not install this new code.

What are the ethical issues involved in searching for vulnerabilities? Again, the party of greatest interest is the user community and the good or harm that can come from the search.

On the positive side, searching may find vulnerabilities. Clearly, it would be wrong for Goli to report vulnerabilities that were not there, simply to get work, and it would also be wrong to report some but not all vulnerabilities, to be able to use the additional vulnerabilities as future leverage against the client.

But suppose Goli does a diligent search for vulnerabilities and reports them to the potential client. Is that not similar to a service station owner's advising you that a headlight is not operating when you take your car in for gasoline? Not quite, you might say. The headlight flaw can be seen without any possible harm to your car; probing for vulnerabilities might cause your system to fail.

The ethical question seems to be which is greater: the potential for good or the potential for harm? And if the potential for good is stronger, how much stronger does it need to be to override the risk of harm?

This case is also related to the common practice of ostensibly nonmalicious probing for vulnerabilities: hackers see if they can access your system without your permission, perhaps by guessing a password. Spafford [SPA98] points out that many crackers simply want to look around, without damaging anything. As discussed in Sidebar 9-6, Spafford compares this seemingly innocent activity with entry into your house when the door is unlocked. Even when done without malicious intent, cracking can be a serious offense; at its worst, it has caused millions of dollars in damage. Although crackers can be prosecuted severely, with harsh penalties, cracking continues to be an appealing crime, especially to juveniles.

Finally, consider Goli's interfering with the operation of web sites whose actions she opposes. We have purposely phrased the issue in a situation that arouses perhaps only a few gourmands and pâtissiers. We can dismiss the interest of the butter fans as an insignificant minority on an insignificant issue. But you can certainly think of many other issues that have brought on wars. (See Denning's excellent article on cybercriminals [DEN99a] for real examples of politically motivated computer activity.)

Many people argue that cracking is an acceptable practice because lack of protection means that the owners of systems or data do not really value them. Spafford [SPA98] questions this logic by using the analogy of entering a house.

Consider the argument that an intruder who does no harm and makes no changes is simply learning about how computer systems operate. "Most of these people would never think to walk down a street, trying every door to find one unlocked, then search through the drawers or the furniture inside. Yet, these same people seem to give no second thought to making repeated attempts at guessing passwords to accounts they do not own, and once onto a system, browsing through the files on disk." How would you feel if you knew your home had been invaded, even if no harm was done?

Spafford notes that breaking into a house or a computer system constitutes trespassing. To do so in an effort to make security vulnerabilities more visible is "presumptuous and reprehensible." To enter either a home or a computer system in an unauthorized way, even with benign intent, can lead to unintended consequences. "Many systems have been damaged accidentally by ignorant (or careless) intruders."

The ethical issues abound in this scenario. Some people will see the (butter) issue as one of inherent good, but is butter use one of the fundamental good principles, such as honesty or fairness or not doing harm to others? Is there universal agreement that butter use is good? Probably there will be a division of the world into the butter advocates (x%), the unrestricted pastry advocates (y%), and those who do not take a position (z%). By how much does x have to exceed y for Goli's actions to be acceptable? What if the value of z is large? Greatest good for the greatest number requires a balance among these three percentages and some measure of benefit or harm.

Is butter use so patently good that it justifies harm to those who disagree? Who is helped and who suffers? Is the world helped if only good, but more expensive, pastries are available, so poor people can no longer afford pastry? Suppose we could determine that 99.9 percent of people in the world agreed that butter use was a good thing. Would that preponderance justify overriding the interests of the other 0.1 percent?

Codes of Ethics

Because of ethical issues such as these, various computer groups have sought to develop codes of ethics for their members. Most computer organizations, such as the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), and the Data Processing Management Association (DPMA), are voluntary organizations. Being a member of one of these organizations does not certify a level of competence, responsibility, or experience in computing. For these reasons, codes of ethics in these organizations are primarily advisory. Nevertheless, these codes are fine starting points for analyzing ethical issues.

The IEEE has produced a code of ethics for its members. The IEEE is an organization of engineers, not limited to computing. Thus, their code of ethics is a little broader than might be expected for computer security, but the basic principles are applicable in computing situations. The IEEE Code of Ethics is shown in Figure 9-1.

Figure 9-1. IEEE Code of Ethics. (Reprinted courtesy of the Institute of Electrical and Electronics Engineers 1996.)


The ACM code of ethics recognizes three kinds of responsibilities of its members: general moral imperatives, professional responsibilities, and leadership responsibilities, both inside the association and in general. The code of ethics has three sections (plus a fourth commitment section), as shown in Figure 9-2.

Figure 9-2. ACM Code of Ethics and Professional Conduct. (Reprinted courtesy of the Association for Computing Machinery 1993.)

Computer Ethics Institute

The Computer Ethics Institute is a nonprofit group that aims to encourage people to consider the ethical aspects of their computing activities. The organization has been in existence since the mid-1980s, founded as a joint activity of IBM, the Brookings Institution, and the Washington Theological Consortium. The group has published its ethical guidance as ten commandments of computer ethics, listed in Figure 9-3.

Figure 9-3. The Ten Commandments of Computer Ethics. (Reprinted with permission, Computer Ethics Institute, Washington, D.C.)

Many organizations take ethics seriously and produce a document guiding the behavior of their members or employees. Some corporations require new employees to read the corporate code of ethics and sign a promise to abide by it. Others, especially universities and research centers, have special boards that must approve proposed research and ensure that projects and team members act ethically. As an individual professional, it may be useful for you to review these codes of ethics and compose a code of your own, reflecting your ideas about appropriate behavior in likely situations. A code of ethics can help you assess situations quickly and act in a consistent, comfortable, and ethical manner.

Conclusion of Computer Ethics

In this study of ethics, we have tried not to decide right and wrong, or even to brand certain acts as ethical or unethical. The purpose of this section is to stimulate thinking about ethical issues concerned with confidentiality, integrity, and availability of data and computations.

The cases presented show complex, conflicting ethical situations. The important first step in acting ethically in a situation is to obtain the facts, ask about any uncertainties, and acquire any additional information needed. In other words, first one must understand the situation.

The second step is to identify the ethical principles involved. Honesty, fair play, proper compensation, and respect for privacy are all ethical principles. Sometimes these conflict, and then we must determine which principles are more important than others. This analysis may not lead to one principle that obviously overshadows all others. Still, a ranking to identify the major principles involved is needed.

The third step is choosing an action that meets these ethical principles. Making a decision and taking action are difficult, especially if the action has evident negative consequences. However, taking action based on a personal ranking of principles is necessary. The fact that other equally sensible people may choose a different action does not excuse you from taking some action.

This section is not trying to force the development of rigid, inflexible principles. Decisions may vary, based on fine differences between two situations. Or a person's views can change over time in response to experience and changing context. Learning to reason about ethical situations is not quite the same as learning "right" from "wrong." Terms such as right and wrong or good and bad imply a universal set of values. Yet we know that even widely accepted principles are overridden by some people in some situations. For example, the principle of not killing people may be violated in the case of war or capital punishment. Few, if any, values are held by everyone or in all cases. Therefore, our purpose in introducing this material has been to stimulate you to recognize and think about ethical principles involved in cases related to computer security. Only by recognizing and analyzing principles can you act consistently, thoughtfully, and responsibly.



A Case Study for Computer Ethics in Context

Aimed at addressing the difficulties associated with teaching often abstract elements of technical ethics, this book is an extended fictional case study into the complexities of technology and social structures in complex organizations. Within this case study, an accidental discovery reveals that the algorithms of Professor John Blackbriar are not quite what they were purported to be. Over the course of 14 newspaper articles, a nebula of professional malpractice and ethical compromise is revealed, ultimately destroying the career of a prominent, successful academic.

The case study touches on many topics relevant to ethics and professional conduct in computer science, and on the social structures within which computer science functions. Themes range from the growing influence of generative AI to the difficulties in explaining complex technical processes to a general audience, also touching on the environmental consequences of blockchain technology and the disproportionate gender impacts of Coronavirus. Each new revelation in the case study unveils further layers of complexity and compromise, leading to new technical and social issues that need to be addressed.

Directly aimed at making ethics in the digital age accessible through the use of real-world examples, this book appeals to computer science students at all levels of the educational system, as well as making an excellent accompaniment to lecturers and course convenors alike.

TABLE OF CONTENTS

  • Chapter 1 (11 pages): About This Case Study
  • Chapter 2 (22 pages): Using the Case Study
  • Chapter 3 (19 pages): A Scandal in Academia
  • Chapter 4 (26 pages): Student Suspensions at Scandal-Ridden University
  • Chapter 5 (15 pages): Multimillion-Pound Consequences for Research Fiddle
  • Chapter 6 (19 pages): Students Speak Out
  • Chapter 7 (22 pages): Leaked Minute Lays Bare University Culture
  • Chapter 8 (16 pages): Brokenbriar Affair Heating Up
  • Chapter 9 (16 pages): Senior University Members Implicated in Growing Scandal
  • Chapter 10 (17 pages): Drunken Professor Lashes Out in Twitter Storm
  • Chapter 11 (30 pages): Culture of Fear and Nepotism at University
  • Chapter 12 (21 pages): Witch-Hunts at the University
  • Chapter 13 (20 pages): Hacker Is Postgraduate Student
  • Chapter 14 (12 pages): Clean-Out at Scandal-Linked Journals
  • Chapter 15 (14 pages): Student Journalism Outs Senior Academics
  • Chapter 16 (14 pages): Resignations All Around at the University of Dunglen
  • Chapter 17 (32 pages): Postscripts
  • Chapter 18 (2 pages): Acknowledgements



Ethics and Professional Responsibility in Computing


ENCYCLOPEDIA OF COMPUTER SCIENCE AND ENGINEERING (WILEY)

Michael C. Loui, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign

Keith W. Miller, Department of Computer Science, University of Illinois at Springfield

August 23, 2007

Abstract. Computing professionals have ethical obligations to clients, employers, other professionals, and the public, in fulfilling their professional responsibilities. These obligations are expressed in codes of ethics, which can be used to make decisions about ethical problems.

Key Words: ethics, profession, moral responsibility, liability, trust, informed consent, peer review, whistle-blowing, code of ethics, ethical decision-making

Disclaimer: The views, opinions, and conclusions expressed in this article are not necessarily those of the University of Illinois or the National Science Foundation.

Address for correspondence (Loui): Coordinated Science Laboratory, 1308 W. Main St., Urbana, IL 61801, USA. Telephone: (217) 333-2595. E-mail: loui AT uiuc DOT edu. Supported by the National Science Foundation under Grant EEC-0628814.

Address for correspondence (Miller): UIS, CSC, UHB 3100; One University Plaza; Springfield, IL 62703, USA. Telephone: (217) 206-7327. E-mail: miller DOT keith AT uis DOT edu.

1. Introduction

Computing professionals perform a variety of tasks: they write specifications for new computer systems, they design instruction pipelines for superscalar processors, they diagnose timing anomalies in embedded systems, they test and validate software systems, they restructure the back-end database of an inventory system, they analyze packet traffic in a local area network, and they recommend security policies for a medical information system. Computing professionals are obligated to perform these tasks conscientiously, because their decisions affect the performance and functionality of computer systems, which in turn affect the welfare of the systems’ users directly and that of other people less directly. For example, the software that controls the automatic transmission of an automobile should minimize gasoline consumption, and more important, ensure the safety of the driver, any passengers, other drivers, and pedestrians.

The obligations of computing professionals are similar to the obligations of other technical professionals, such as civil engineers. Taken together, these professional obligations are called professional ethics. Ethical obligations have been studied by philosophers and articulated by religious leaders for many years. Within the discipline of philosophy, ethics encompasses the study of the actions that a responsible individual ought to choose, the values that an honorable individual ought to espouse, and the character that a virtuous individual ought to have. For example, everyone ought to be honest, fair, kind, civil, respectful, and trustworthy. Besides these general obligations that everyone shares, professionals have additional obligations that arise from the responsibilities of their professional work and their relationships with clients, employers, other professionals, and the public.

The ethical obligations of computing professionals go beyond complying with laws or regulations; laws often lag behind advances in technology. For example, before the passage of the Electronic Communications Privacy Act of 1986 in the United States, government officials did not require a search warrant to collect personal information transmitted over computer communication networks. Nevertheless, even in the absence of a privacy law before 1986, computing professionals should have been aware of the obligation to protect the privacy of personal information.

2. What Is a Profession?

Computing professionals include hardware designers, software engineers, database administrators, system analysts, and computer scientists. In what ways do these occupations resemble recognized professions such as medicine, law, engineering, counseling, and accounting? In what ways do computing professions resemble occupations that are not traditionally thought of as professions, such as plumbers, fashion models, and sales clerks?

Professions that exhibit certain characteristics are called strongly differentiated professions (1). These are professions, such as medicine and law, whose members have special rights and responsibilities. The defining characteristics of a strongly differentiated profession are specialized knowledge and skills, systematic research, professional autonomy, a robust professional association, and a well-defined social good associated with the profession.

Members of a strongly differentiated profession have specialized knowledge and skills, often called a “body of knowledge,” gained through formal education and practical experience. Although plumbers also have special knowledge and skills, education in the trades such as plumbing emphasizes apprenticeship training rather than formal education. An educational program in a professional school teaches students the theoretical basis of a profession, which is difficult to learn without formal education. A professional school also socializes students to the values and practices of the profession. Engineering schools teach students to value efficiency and to reject shoddy work. Medical schools teach students to become physicians, and law schools teach future attorneys. Because professional work has a significant intellectual component, entry into a profession often requires a post-baccalaureate degree such as the M.S.W. (Master of Social Work) or the Psy.D. (Doctor of Psychology).

Professionals value the expansion of knowledge through systematic research—they do not rely exclusively on the transmission of craft traditions from one generation to the next. Research in a profession is conducted by academic members of the profession, and sometimes by practitioner members too. Academic physicians, for example, conduct medical research. Because professionals understand that professional knowledge always advances, professionals should also engage in continuing education by reading publications and attending conferences. Professionals should share general knowledge of their fields, rather than keeping secrets of a guild. Professionals are obligated, however, to keep specific information about clients confidential.

Professionals tend to have clients, not customers. Whereas a sales clerk should try to satisfy the customer's desires, the professional should try to meet the client's needs (consistent with the welfare of the client and the public). For example, a physician should not give a patient a prescription for barbiturates just because the patient wants the drugs, but only if the patient's medical condition warrants the prescription.

Because professionals have specialized knowledge, clients cannot fully evaluate the quality of services provided by professionals. Only other members of a profession, the professional’s peers, can sufficiently determine the quality of professional work. The principle of peer review underlies accreditation and licensing activities: members of a profession evaluate the quality of an educational program for accreditation, and they set the requirements for the licensing of individuals. For example, in the United States, a lawyer must pass a state’s bar exam to be licensed to practice in that state. (Most states have reciprocity arrangements—a professional license granted by one state is recognized by other states.) The license gives professionals legal authority and privileges that are not available to unlicensed individuals. For example, a licensed physician may legitimately prescribe medications and perform surgery, activities that should not be performed by people who are not medical professionals.

Through accreditation and licensing, the public cedes control over a profession to members of the profession. In return for this autonomy, the profession promises to serve the public good. Medicine is devoted to advancing human health, law to the pursuit of justice, engineering to the economical construction of safe and useful objects. As an example of promoting the public good over the pursuit of self interest, professionals are expected to provide services to some indigent clients without charge. For instance, physicians volunteer at free clinics, and they serve in humanitarian missions to developing countries. Physicians and nurses are expected to render assistance in cases of medical emergency—for instance, when a train passenger suffers a heart attack. In sum, medical professionals have special obligations that those who are not medical professionals do not have.

The purposes and values of a profession, including its commitment to a public good, are expressed by its code of ethics. A fortiori, the creation of a code of ethics is one mark of the transformation of an occupation into a profession.

A profession’s code of ethics is developed and updated by a national or international professional association. This association publishes periodicals and hosts conferences to enable professionals to continue their learning and to network with other members of the profession. The association typically organizes the accreditation of educational programs and the licensing of individual professionals.

Do computing professions measure up to these criteria for a strongly differentiated profession? To become a computing professional, an individual must acquire specialized knowledge about discrete algorithms and relational database theory, and specialized skills such as software development techniques and digital system design. Computing professionals usually learn this knowledge and acquire these skills by earning a baccalaureate degree in computer science, computer engineering, information systems, or a related field. As in engineering, a bachelor’s degree currently suffices for entry to the computing professions. The knowledge base for computing expands through research in computer science conducted in universities and in industrial and government laboratories.

Like electrical engineers, most computing professionals work for employers, who might not be the professionals’ clients. For example, a software engineer might develop application software that controls a kitchen appliance; the engineer’s employer might be different from the appliance manufacturer. Furthermore, the software engineer should prevent harm to the ultimate users of the appliance, and others who might be affected by the appliance. Thus, the computing professional’s relationship with a client and with the public might be indirect.

The obligations of computing professionals to clients, employers, and the public are expressed in several codes of ethics. Section 5 below reviews two codes that apply to computing professionals.

Although the computing professions meet many criteria of other professions, they are deficient in significant ways. Unlike academic programs in engineering, relatively few academic programs in computing are accredited. Furthermore, in the United States, computing professionals cannot be licensed, except that software engineers can be licensed in Texas. As of this writing, the Association for Computing Machinery (ACM) has reaffirmed its opposition to state-sponsored licensing of individuals (2). Computing professionals may earn proprietary certifications offered by corporations such as Cisco, Novell, Sun, and Microsoft. In the United States, the American Medical Association dominates the medical profession, and the American Bar Association dominates the legal profession, but no single organization defines the computing profession. Instead, there are multiple distinct organizations, including the ACM, the Institute of Electrical and Electronics Engineers (IEEE) Computer Society, and the Association of Information Technology Professionals (AITP). Although these organizations cooperate on some projects, they remain largely distinct, with separate publications and codes of ethics.

Regardless of whether computing professions are strongly differentiated, computing professionals have important ethical obligations, as explained in the remainder of this article.

3. What Is Moral Responsibility in Computing?

In the early 1980s, Atomic Energy of Canada Limited (AECL) manufactured and sold a cancer radiation treatment machine called the Therac-25, which relied on computer software to control its operation. Between 1985 and 1987, the Therac-25 caused the deaths of three patients and serious injuries to three others (3). Who was responsible for the accidents? The operator who administered the massive radiation overdoses, which produced severe burns? The software developers who wrote and tested the control software, which contained several serious errors? The system engineers who neglected to install the backup hardware safety mechanisms that had been used in previous versions of the machine? The manufacturer, AECL? Government agencies? We can use the Therac-25 case to distinguish between four different kinds of responsibility (4, 5).

Causal responsibility. Responsibility can be attributed to causes: for example, "the tornado was responsible for damaging the house." In the Therac-25 case, the proximate cause of each accident was the operator, who started the radiation treatment. But just as the weather cannot be blamed for a moral failing, the Therac-25 operators cannot be blamed because they followed standard procedures, and the information displayed on the computer monitors was cryptic and misleading.

Role responsibility. An individual who is assigned a task or function is considered the responsible person for that role. In this sense, a foreman in a chemical plant may be responsible for disposing of drums of toxic waste, even if a forklift operator actually transfers the drums from the plant to the truck. In the Therac-25 case, the software developers and system engineers were assigned the responsibility of designing the software and hardware of the machine. Insofar as their designs were deficient, they were responsible for those deficiencies because of their roles. Even if they had completed their assigned tasks, however, their role responsibility may not encompass the full extent of their professional responsibilities.

Legal responsibility. An individual or an organization can be legally responsible, or liable, for a problem. That is, the individual could be charged with a crime, or the organization could be liable for damages in a civil lawsuit. Similarly, a physician can be sued for malpractice. In the Therac-25 case, AECL could have been sued. One kind of legal responsibility is strict liability: if a product injures someone, then the manufacturer of the product can be found liable for damages in a lawsuit, even if the product met all applicable safety standards and the manufacturer did nothing wrong. The principle of strict liability encourages manufacturers to be careful, and it provides a way to compensate victims of accidents.

Moral responsibility. Causal, role, and legal responsibilities tend to be exclusive: if one individual is responsible, then another is not. In contrast, moral responsibility tends to be shared: many engineers are responsible for the safety of the products that they design, not just a designated safety engineer. Furthermore, rather than assign blame for a past event, moral responsibility focuses on what individuals should do in the future. In the moral sense, responsibility is a virtue: a "responsible person" is careful, considerate, and trustworthy; an "irresponsible person" is reckless, inconsiderate, and untrustworthy.

Responsibility is shared whenever multiple individuals collaborate as a group, such as a software development team. When moral responsibility is shared, responsibility is not atomized to the point at which no one in the group is responsible. Rather, each member of the group is accountable to the other members of the group and to those whom the group’s work might affect, both for the individual’s own actions and for the effects of their collective effort. For example, suppose a computer network monitoring team has made mistakes in a complicated statistical analysis of network traffic data, and these mistakes have changed the interpretation of the reported results. If the team members do not reanalyze the data themselves, they have an obligation to seek the assistance of a statistician who can analyze the data correctly. Different team members might work with the statistician in different ways, but they should hold each other accountable for their individual roles in correcting the mistakes. Finally, the team has a collective moral responsibility to inform readers of the team’s initial report about the mistakes and the correction.

Moral responsibility for recklessness and negligence is not mitigated by the presence of good intentions or by the absence of bad consequences. Suppose a software tester neglects to sufficiently test a new module for a telephone switching system, and the module fails. Although the subsequent telephone service outages are not intended, the software tester is morally responsible for the harms caused by the outages. Suppose a hacker installs a keystroke logging program in a deliberate attempt to steal passwords at a public computer. Even if the program fails to work, the hacker is still morally responsible for attempting to invade the privacy of users.

An individual can be held morally responsible both for acting and for failing to act. For example, a hardware engineer might notice a design flaw that could result in a severe electrical shock to someone who opens a personal computer system unit to replace a memory chip. Even if the engineer is not specifically assigned to check the electrical safety of the system unit, the engineer is morally responsible for calling attention to the design flaw, and the engineer can be held accountable for failing to act.

Computing systems often obscure accountability (5). In particular, in an embedded system such as the Therac-25, the computer that controls the device is hidden. Computer users seem resigned to accepting defects in computers and software that cause intermittent crashes and losses of data. Errors in code are called “bugs,” regardless of whether they are minor deficiencies or major mistakes that could cause fatalities. In addition, because computers appear to act autonomously, people tend to blame the computers themselves for failing, instead of the professionals who designed, programmed, and produced the computers.

4. What Are the Responsibilities of Computing Professionals?

Responsibilities to Clients and Users

Whether a computing professional works as a consultant to an individual or as an employee in a large organization, the professional is obligated to perform assigned tasks competently, according to professional standards. These professional standards include not only attention to technical excellence but also concern for the social effects of computers on operators, users, and the public. When assessing the capabilities and risks of computer systems, the professional must be candid: the professional must report all relevant findings honestly and accurately. When designing a new computer system, the professional must consider not only the specifications of the client, but also how the system might affect the quality of life of users and others. For example, a computing professional who designs an information system for a hospital should allow speedy access by physicians and nurses, yet protect patients’ medical records from unauthorized access; the technical requirement to provide fast access may conflict with the social obligation to ensure patients’ privacy.

Computing professionals enjoy considerable freedom in deciding how to meet the specifications of a computer system. Provided that they meet the minimum performance requirements for speed, reliability, and functionality, within an overall budget, they may choose to invest resources to decrease the response time rather than to enhance a graphical user interface, or vice versa. Because choices involve tradeoffs between competing values, computing professionals should identify potential biases in their design choices (6). For example, the designer of a search engine for an online retailer might choose to display the most expensive items first. This choice might favor the interest of the retailer, to maximize profit, over the interest of the customer, to minimize cost.
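
A single line of code can embed the bias being described. The sketch below is hypothetical; the catalog and field names are invented for illustration.

```python
# An invented product catalog with a relevance score for the user's query.
catalog = [
    {"name": "Basic blender", "price": 25,  "relevance": 0.95},
    {"name": "Mid blender",   "price": 60,  "relevance": 0.90},
    {"name": "Pro blender",   "price": 180, "relevance": 0.70},
]

# Sort key chosen to favor the retailer: most expensive items first.
by_price = sorted(catalog, key=lambda item: item["price"], reverse=True)

# Sort key chosen to favor the customer: best match to the query first.
by_relevance = sorted(catalog, key=lambda item: item["relevance"], reverse=True)
```

Both rankings are technically correct; the choice between them is a value judgment that the designer makes on behalf of users, usually invisibly.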

Even moderately large software artifacts (computer programs) are inherently complex and error-prone. Furthermore, software is generally becoming more complex. It is therefore reasonable to assume that all software artifacts have errors. Even if a particular artifact does not contain errors, it is extremely difficult to prove its correctness. Faced with these realities, how can a responsible software engineer release software that is likely to fail sometime in the future? Other engineers confront the same problem, because all engineering artifacts eventually fail.

Whereas most engineering artifacts fail because physical objects wear out, however, software artifacts are most likely to fail because of faults designed into the original artifact. The intrinsically faulty nature of software distinguishes it from light bulbs and I-beams, for example, whose failures are easier to predict statistically.

To acknowledge responsibilities for the failure of software artifacts, software developers should exercise due diligence in creating software, and they should be as candid as possible about both known and unknown faults in the software—particularly software for safety-critical systems, in which a failure can threaten the lives of people. Candor by software developers would give software consumers a better chance to make reasonable decisions about software before they buy it (7). Following an established tradition in medicine, Miller (8) advocates "software informed consent" as a way to formalize an ethical principle that requires openness from software developers. Software informed consent requires software developers to reveal, using explanations that are understandable to their customers, the risks of their software, including the likelihoods of known faults and the probabilities that undiscovered faults still exist.

The idea of software informed consent motivates candor, and also requires continuing research into methods of discovering software faults and measuring risk.
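
What might such candor look like in practice? One hypothetical form, sketched below under assumed field names and invented figures, is a machine-readable known-faults record shipped with each release and rendered in plain language for customers; it is an illustration, not a published standard.

```python
# An invented disclosure record; identifiers, frequencies, and wording
# are illustrative assumptions only.
known_faults = [
    {
        "id": "FAULT-102",
        "summary": "monitor may drop one reading after a clock change",
        "severity": "high",
        "estimated_frequency": "about 1 in 10,000 device-days",
        "workaround": "re-synchronize the device clock after travel",
    },
]

def disclosure_text(faults):
    """Render the known-faults list in language a customer can understand."""
    return "\n".join(
        f"{f['id']} ({f['severity']}): {f['summary']}. "
        f"Estimated frequency: {f['estimated_frequency']}. "
        f"Workaround: {f['workaround']}."
        for f in faults
    )

print(disclosure_text(known_faults))
```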

Responsibilities to Employers

Most computing professionals work for employers. The employment relationship is contractual: the professional promises to work for the employer in return for a salary and benefits. Professionals often have access to the employer’s proprietary information such as trade secrets, and the professional must keep this information confidential. Besides trade secrets, the professional must also honor other forms of intellectual property owned by the employer: the professional does not have the right to profit from independent sale or use of this intellectual property, including software developed with the employer’s resources.

Every employee is expected to work loyally on behalf of the employer. In particular, professionals should be aware of potential conflicts of interest, in which loyalty might be owed to other parties besides the employer. A conflict of interest arises when a professional is asked to render a judgment, but the professional has personal or financial interests that may interfere with the exercise of that judgment. For instance, a computing professional may be responsible for ordering computing equipment, and an equipment vendor owned by the professional’s spouse might submit a bid. In this case, others would perceive that the marriage relationship might bias the professional’s judgment. Even if the spouse’s equipment would be the best choice, the professional’s judgment would not be trustworthy. In a typical conflict of interest situation, the professional should recuse herself: that is, the professional should remove herself and ask another qualified person to make the decision.

Many computing professionals have managerial duties, and some are solely managers. Managerial roles complicate the responsibilities of computing professionals because managers have administrative responsibilities and interests within their organizations, in addition to their professional responsibilities to clients and the public.

Responsibilities to Other Professionals

While everyone deserves respect from everyone else, when professionals interact with each other, they should demonstrate a kind of respect called collegiality. For example, when one professional uses the ideas of a second professional, the first should credit the second. In a research article, an author gives credit by properly citing the sources of ideas due to other authors in previously published articles. Using these ideas without attribution constitutes plagiarism. Academics consider plagiarism unethical because it represents the theft of ideas and the misrepresentation of those ideas as the plagiarist's own.

Because clients cannot adequately evaluate the quality of professional service, individual professionals know that their work must be evaluated by other members of the same profession. This evaluation, called peer review, occurs in both practice and research. Research in computing is presented at conferences and published in scholarly journals. Before a manuscript that reports a research project can be accepted for a conference or published in a journal, the manuscript must be reviewed by peer researchers who are experts in the subject of the manuscript.

Because computing professionals work together, they must observe professional standards. These standards of practice are created by members of the profession, or within organizations. For example, in software development, one standard of practice is a convention for names of variables in code. By following coding standards, a software developer can facilitate the work of a software maintainer who subsequently modifies the code. For many important issues for which standards would be theoretically appropriate, however, "standards" in software engineering are controversial, informal, or nonexistent. An example of this problem is the difficulties encountered when the IEEE and the ACM attempted to standardize a body of knowledge for software engineering, to enable the licensing of software engineers.
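
The variable-naming example is easy to make concrete. The two functions below behave identically; the second follows a hypothetical team convention (descriptive lowercase_with_underscores names and named constants) that makes later maintenance far safer. The rate and names are invented for illustration.

```python
# Without a naming standard: legal Python, but opaque to a maintainer.
def f(a, b):
    t = a * b * 0.0825
    return a * b + t

# With a convention: same behavior, self-explanatory to the next reader.
SALES_TAX_RATE = 0.0825  # illustrative rate

def total_with_tax(unit_price, quantity):
    subtotal = unit_price * quantity
    return subtotal + subtotal * SALES_TAX_RATE
```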

Senior professionals have an obligation to mentor junior professionals in the same field. Although professionals are highly educated, junior members of a profession require further learning and experience to develop professional judgment. This learning is best accomplished under the tutelage of a senior professional. In engineering, to earn a P.E. license, a junior engineer must work under the supervision of a licensed engineer for at least four years. More generally, professionals should assist each other in continuing education and professional development, which are generally required for maintaining licensure.

Professionals can fulfill their obligations to contribute to the profession by volunteering. The peer review of research publications depends heavily on volunteer reviewers and editors, and the activities of professional associations are conducted by committees of volunteers.

Responsibilities to the Public

According to engineering codes of ethics, the engineer’s most important obligation is to ensure the safety, health, and welfare of the public. Although everyone must avoid endangering others, engineers have a special obligation to ensure the safety of the objects that they produce. Computing professionals share this special obligation to guarantee the safety of the public, and to improve the quality of life of those who use computers and information systems.

As part of this obligation, computing professionals should enhance the public's understanding of computing. The responsibility to educate the public is a collective responsibility of the computing profession as a whole; individual professionals might fulfill this responsibility in their own ways. Examples of such public service include advising a church on the purchase of computing equipment and writing a letter to the editor of a newspaper about technical issues related to proposed legislation to regulate the Internet.

It is particularly important for computing professionals to contribute their technical knowledge to discussions about public policies regarding computing. Many communities are considering controversial measures such as the installation of Web filtering software on public access computers in libraries. Computing professionals can participate in communities’ decisions by providing technical facts. Technological controversies involving the social impacts of computers are covered in a separate article of this encyclopedia.

When a technical professional's obligation of loyalty to the employer conflicts with the obligation to ensure the safety of the public, the professional may consider whistle-blowing, that is, alerting people outside the employer's organization to a serious, imminent threat to public safety. Computer engineers blew the whistle during the development of the Bay Area Rapid Transit (BART) system near San Francisco (9). In the early 1970s, three BART engineers became alarmed by deficiencies in the design of the electronics and software for the automatic train control system, deficiencies that could have endangered passengers on BART trains. The engineers raised their concerns within the BART organization without success. Finally, they contacted a member of the BART board of directors, who passed their concerns to Bay Area newspapers. The three engineers were immediately fired for disloyalty. They were never reinstated, even when an accident proved their concerns were valid. When the engineers sued the BART managers, the IEEE filed an amicus curiae brief on the engineers' behalf, stating that engineering codes of ethics required the three engineers to act to protect the safety of the public. The next section describes codes of ethics for computing professionals.

5. Codes of Ethics

For each profession, the professional's obligations to clients, employers, other professionals, and the public are stated explicitly in the profession's code of ethics or code of professional conduct. For computing professionals, such codes have been developed by the Association for Computing Machinery (ACM), the British Computer Society (BCS), the Computer Society of the Institute of Electrical and Electronics Engineers (IEEE-CS), the Association of Information Technology Professionals (AITP), the Hong Kong Computer Society, the Systems Administrators Special Interest Group of USENIX (SAGE), and other associations. Two of these codes will be described briefly here: the ACM code and the Software Engineering Code jointly approved by the IEEE-CS and the ACM.

ACM is one of the largest nonprofit scientific and educational organizations devoted to computing. In 1966 and 1972, the ACM published codes of ethics for computing professionals. In 1992, the ACM adopted the current Code of Ethics and Professional Conduct (10), which appears in Appendix #1. Each statement of the code is accompanied by interpretive guidelines. For example, the guideline for statement 1.8, Honor confidentiality, indicates that other ethical imperatives such as complying with a law may take precedence. Unlike ethics codes for other professions, one section of the ACM code states the ethical obligations of "organizational leaders," who are typically technical managers.

The ACM collaborated with IEEE-CS to produce the Software Engineering Code of Ethics and Professional Practice (11). Like the ACM code, the Software Engineering Code also includes the obligations of technical managers. This code is notable in part because it was the first code to focus exclusively on software engineers, not other computing professionals. This code is broken into a short version and a long version. The short version comprises a preamble and eight short principles; this version appears in Appendix #2. The long version expands on the eight principles with multiple clauses that apply the principles to specific issues and situations.

Any code of ethics is necessarily incomplete—no document can address every possible situation. In addition, a code must be written in general language; each statement in a code requires interpretation to be applied in specific circumstances. Nevertheless, a code of ethics can serve multiple purposes (12, 13). A code can inspire members of a profession to strive for the profession’s ideals. A code can educate new members about their professional obligations, and tell nonmembers what they may expect members to do. A code can set standards of conduct for professionals and provide a basis for expelling members who violate these standards. Finally, a code may support individuals in making difficult decisions. For example, because all engineering codes of ethics prioritize the safety and welfare of the public, an engineer can object to unsafe practices not merely as a matter of individual conscience, but with the full support of the consensus of the profession. The application of a code of ethics for making decisions is highlighted in the next section.

6. Ethical Decision-Making for Computing Professionals

Every user of e-mail has received unsolicited bulk commercial e-mail messages, known in a general way as spam. (A precise definition of "spam" has proven elusive and controversial; most people know spam when they see it, but no universally accepted legal or ethical definition has emerged.) A single spam broadcast can initiate millions of messages. Senders of spam claim that they are exercising their free speech rights, and few laws have attempted to restrict spamming. In the United States, no federal law prohibited spamming before the CAN-SPAM Act of 2003. Even now, the CAN-SPAM law does not apply to spam messages that originate in other countries. Although some prosecutions have occurred under the CAN-SPAM Act, most people still receive many e-mail messages that they consider spam.

Some spam messages may be deceptive—they may appear genuine—but others are completely accurate. Although most spamming is not illegal, even honest spamming is considered unethical by many people, for the following reasons. First, spamming has bad consequences: it wastes the time of recipients who must delete junk e-mail messages, and these messages waste space on computers; in addition, spamming reduces users’ trust in e-mail.

Second, spamming is not reversible: senders of spam do not want to receive spam. Third, spamming could not be allowed as a general practice: if everyone attempted to broadcast spam messages to wide audiences, computer networks would become clogged with unwanted e-mail messages, and no one would be able to communicate at all.

The three reasons advanced against spam correspond to three ways in which the morality of an action can be evaluated: first, whether on balance the action results in more good consequences than bad consequences; second, whether the actor would be willing to trade places with someone affected by the action; third, whether everyone (in a similar situation) could choose the same action as a general rule. These three kinds of moral reasons correspond to three traditions in philosophical ethics: consequentialism, Golden Rule, and duty-based ethics.

Ethical issues in the use of computers can also be evaluated through the use of analogies to more familiar situations. For example, a hacker may try to justify gaining unauthorized access to unsecured data by reasoning that because the data are not protected, anyone should be able to read them. But by analogy, someone who finds the front door of a house unlocked is not justified in entering the house and snooping around. Entering an unlocked house is trespassing, and trespassing violates the privacy of the house's occupants.

When making ethical decisions, computing professionals can rely not only on general moral reasoning but also on specific guidance from codes of ethics, such as the ACM Code of Ethics (10). Here is a fictional example of that approach.

XYZ Corporation plans to secretly monitor the Web pages visited by its employees, using a data mining program to analyze the access records. Chris, an engineer at XYZ, recommends that XYZ purchase a data mining program from Robin, an independent contractor, without mentioning that Robin is Chris's domestic partner. Robin had developed this program while previously employed at UVW Corporation, without the knowledge of anyone at UVW.

First, the monitoring of Web accesses intrudes on employees' privacy; it is analogous to eavesdropping on telephone calls. Professionals should respect the privacy of individuals (ACM Code 1.7, Respect the privacy of others, and 3.5, Articulate and support policies that protect the dignity of users and others affected by a computing system). Second, Chris has a conflict of interest because the sale would benefit Chris's domestic partner. By failing to mention this relationship, Chris was disingenuous (ACM Code 1.3, Be honest and trustworthy). Third, because Robin developed the program while working at UVW, some and perhaps all of the property rights belong to UVW. Robin probably signed an agreement that software developed while employed at UVW belongs to UVW. Professionals should honor property rights and contracts (ACM Code 1.5, Honor property rights including copyrights and patents, and 2.6, Honor contracts, agreements, and assigned responsibilities).

Applying a code of ethics might not yield a clear solution to an ethical problem, because different principles in a code might conflict. For instance, the principles of honesty and confidentiality conflict when a professional who is questioned about the technical details of the employer's forthcoming product must choose between answering the question completely and keeping the information secret. Consequently, more sophisticated methods have been developed for solving ethical problems. Maner (14) has studied and collected what he calls "procedural ethics, step-by-step ethical reasoning procedures … that may prove useful to computing professionals engaged in ethical decision-making." Maner's list includes a method specialized for business ethics (15), a paramedic method (16), and a procedure from the U.S. Department of Defense (17). These procedures appeal to the problem-solving ethos of engineering, and they help professionals avoid specific traps that might otherwise impair ethical judgment. No procedural method should be interpreted as providing complete objectivity or a mechanical algorithm for reaching a conclusion, however, because all professional ethics issues of any complexity require subtle and subjective judgments.

7. Computing and the Study of Ethics: The Ethical Challenges of Artificial Intelligence and Autonomous Agents

Many ethical issues, such as conflict of interest, are common to different professions. In computing and engineering, however, unique ethical issues arise from the creation of machines whose outward behaviors resemble human behaviors that we consider “intelligent.” As machines become more versatile and sophisticated, and as they increasingly take on tasks that were once assigned only to humans, computing professionals and engineers must rethink their relationship to the artifacts they design, develop, and then deploy.

For many years, ethical challenges have been part of discussions of artificial intelligence; two classic references in the field are by Norbert Wiener in 1965 (18) and by Joseph Weizenbaum in 1976 (19). Since the 1990s, the emergence of sophisticated "autonomous agents," including Web "bots" and physical robots, has intensified the ethical debate. Two fundamental issues are of immediate concern: the responsibility of the computing professionals who create these sophisticated machines, and the possibility that the machines themselves will become sufficiently sophisticated to be considered moral agents in their own right, capable of ethical praise or blame independent of the engineers and scientists who developed them. This area of ethics is controversial and actively researched, and a full discussion of even some of its nuances is beyond the scope of this article. Essays by Floridi and Sanders (20) and by Himma (21) are two influential examples of recent work in the area.

1. A. Goldman, The Moral Foundations of Professional Ethics, Rowman & Littlefield, New Jersey, 1980.
2. J. White and B. Simons, ACM's position on the licensing of software engineers, Communications of the ACM, vol. 45, no. 11, p. 91, Nov. 2002.
3. N. G. Leveson and C. S. Turner, An investigation of the Therac-25 accidents, Computer, vol. 26, no. 7, pp. 18–41, July 1993.
4. J. Ladd, Collective and individual moral responsibility in engineering: some questions, IEEE Technology and Society Magazine, vol. 1, no. 2, pp. 3–10, June 1982.
5. H. Nissenbaum, Computing and accountability, Communications of the ACM, vol. 37, no. 1, pp. 73–80, Jan. 1994.
6. B. Friedman and H. Nissenbaum, Bias in computer systems, ACM Transactions on Information Systems, vol. 14, no. 3, pp. 330–347, July 1996.
7. C. Kaner, Blog: Software customer bill of rights, August 27, 2003. Retrieved June 23, 2007 from http://www.satisfice.com/kaner/.
8. K. Miller, Software informed consent: docete emptorem, not caveat emptor, Science and Engineering Ethics, vol. 4, no. 3, pp. 357–362, July 1998.
9. G. D. Friedlander, The case of the three engineers vs. BART, IEEE Spectrum, vol. 11, no. 10, pp. 69–76, Oct. 1974.
10. R. Anderson, D. G. Johnson, D. Gotterbarn, and J. Perrolle, Using the new ACM code of ethics in decision making, Communications of the ACM, vol. 36, no. 2, pp. 98–107, Feb. 1993.
11. D. Gotterbarn, K. Miller, and S. Rogerson, Software engineering code of ethics is approved, Communications of the ACM, vol. 42, no. 10, pp. 102–107, Oct. 1999.
12. M. Davis, Thinking like an engineer: the place of a code of ethics in the practice of a profession, Philosophy and Public Affairs, vol. 20, no. 2, pp. 150–167, Spring 1991.
13. D. Gotterbarn, Computing professionals and your responsibilities: virtual information and the software engineering code of ethics, Chapter 9, pp. 200–219, in Internet Ethics, ed. D. Langford, St. Martin's Press, New York, 2000.
14. W. Maner, Heuristic methods for computer ethics, Metaphilosophy, vol. 33, no. 3, pp. 339–365, 2002.
15. D. L. Mathison, Teaching an ethical decision model that skips the philosophers and works for real business students, Proceedings, National Academy of Management, New Orleans, 1987, pp. 1–9.
16. W. R. Collins and K. Miller, A paramedic method for computing professionals, Journal of Systems and Software, vol. 17, no. 1, pp. 47–84, Jan. 1992.
17. United States Department of Defense, Joint ethics regulation DoD 5500.7-R, 1999. Retrieved June 26, 2007 from http://www.defenselink.mil/dodgc/defense_ethics/ethics_regulation/jer1-4.doc.
18. N. Wiener, Cybernetics: or Control and Communication in the Animal and the Machine, MIT Press, Cambridge, MA, 1965.
19. J. Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation, W. H. Freeman & Co., New York, NY, 1976.
20. L. Floridi and J. Sanders, On the morality of artificial agents, Minds and Machines, vol. 14, no. 3, pp. 349–379, Aug. 2004.
21. K. Himma, There's something about Mary: the moral value of things qua information objects, Ethics and Information Technology, vol. 6, no. 3, pp. 145–159, Sep. 2004.

Reading List

  • D. G. Johnson, Professional ethics, Chapter 3, pp. 54–80, in Computer Ethics, 3rd ed., Prentice Hall, Upper Saddle River, NJ, 2001.
  • M. J. Quinn, Professional ethics, Chapter 9, pp. 365–403, in Ethics for the Information Age, Pearson/Addison-Wesley, Boston, 2005.
  • H. Tavani, Professional ethics, codes of conduct, and moral responsibility, Chapter 4, pp. 87–116, in Ethics and Technology: Ethical Issues in an Age of Information and Communication Technology, Wiley, New York, 2004.

Appendix 1: ACM Code of Ethics and Professional Conduct

http://www.acm.org/about/code-of-ethics

Appendix 2: Software Engineering Code of Ethics and Professional Practice (short version)

http://www.acm.org/about/se-code/


Computer Ethics: Some Case Study

This document discusses four case studies related to computer ethics: 1) copying software, 2) data mining customer information, 3) freedom of expression online, and 4) gender representation in computer games. Each case study presents an ethical dilemma, outlines relevant details, and poses discussion questions about professionals' responsibilities and how to balance various stakeholder interests.



The deck (Dr. Varun Kumar, IIIT Surat, Lecture 3) presents four scenarios.

Scenario A: Should I copy software? Ramesh invests small amounts on the stock market. Last year he bought, and successfully employed, a software package to help him with his investments. Recently he met Suresh, who was also interested in using the software. Suresh borrowed the package, copied it, and then returned it. Both vaguely knew that the software was proprietary, but neither read up on the details. Did Ramesh and Suresh do anything wrong, and if so, what? Something to consider: should software packages be lent at all? When is it justifiable to break a law: when the law is bad, when it is inappropriate, or merely when it is easy to break?

Scenario B: Should a company data mine? Data mining is the exploration and analysis of large quantities of data, by automatic or semi-automatic means, in order to discover meaningful patterns and rules; in many cases the data were not collected primarily for that purpose. Mr. Vijay sells hardware and software to over 100,000 customers per year and has 10 years of experience. As part of the billing process he keeps information on customers. He buys a data mining tool and uses it to derive useful information about his clients, such as zip codes, credit card numbers, and ID numbers. Most of this information identifies groups rather than individuals, and he can use it to market his wares more efficiently. Is this ethical, given that customers did not give him the information for this purpose? Something to consider: should customers be notified? Is a policy needed, and what should it look like? (The sketch after these scenarios illustrates the group-versus-individual distinction.)

Scenario C: Freedom of expression. In the US, a student, JB, posted sex fantasies on the Internet under the title "Pamela's ordeal." The story was fictional, but JB named the main character, Pamela, after a real student. In it, he described the rape, torture, and murder of Pamela. He also exchanged e-mails with other people in newsgroups, discussing sex acts. An alumnus saw the postings and reported them to the university. JB was arrested and held in custody, charged with transmitting an interstate communication of a threat to injure another person. The charges were eventually dropped, but did JB really do anything wrong? Something to consider: should self-censorship be enforced, and who decides what is acceptable? Is a public policy needed?

Scenario D: Professional responsibility. Mike works for a software development company that makes computer games for children aged 8 to 14. The latest game Mike worked on uses inferential reasoning and allows players to choose different characters, primarily a macho man and a sexy woman; the game is used mainly by boys. Recently Mike attended a conference on gender and minorities, where he described the game, and the delegates discussed the low participation of women in computing and how to make the industry more attractive to women. Back at work, Mike realized that his production team is all male. Should he refuse to work on this team? Should he ask for the team to be reviewed? Will the game sell as well if a different message is given? What is his responsibility? More broadly, do professionals have a responsibility to ensure that computing serves humanity well?
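The group-versus-individual distinction in Scenario B can be made concrete. The sketch below is an illustration invented for this discussion (the field names, records, and threshold are assumptions, not part of the original slides): it releases a marketing statistic only for groups of at least k customers, a simple form of the k-anonymity idea.

```python
from collections import Counter

# Hypothetical billing records: (zip_code, product_category) pairs.
records = [
    ("395007", "hardware"), ("395007", "software"), ("395007", "hardware"),
    ("395001", "software"), ("395001", "hardware"),
    ("110001", "software"),
]

K = 3  # minimum group size before any statistic is released

def marketable_segments(rows, k=K):
    """Return only zip-code segments large enough that no individual stands out."""
    by_zip = Counter(zip_code for zip_code, _ in rows)
    return {z: n for z, n in by_zip.items() if n >= k}

# Zip 110001 contains a single customer; reporting on it would effectively
# identify an individual, which is the line the scenario asks about crossing.
print(marketable_segments(records))  # {'395007': 3}
```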

Social and Ethical Responsibilities of Computing


The Social and Ethical Responsibilities of Computing (SERC) is facilitating the development of responsible “habits of mind and action” for those who create and deploy computing technologies and fostering the creation of technologies in the public interest. Through a teaching, research and engagement framework , SERC is working to train students, encourage research to assess the broad challenges and opportunities associated with computing, and improve design, policy, implementation, and impacts. SERC is led by associate deans Caspar Hare , professor of philosophy, and Nikos Trichakis , associate professor of operations management.

SERC Framework

  • Coordinated curriculum: Develop original pedagogical materials that can be incorporated into existing classes, across all levels of instruction. We have a broad mission to incorporate insights and perspectives from a range of disciplines and fields of study. New materials are developed by multidisciplinary teams with members from across computing, data sciences, humanities, arts, and social sciences, for use in each of these types of classes. We also help to support new courses, such as 24.133 (Experiential Ethics).
  • Case studies: We commission and publish a series of peer-reviewed MIT Case Studies in Social and Ethical Responsibilities of Computing. The cases are brief, based on original research, and appropriate for use in undergraduate instruction across a range of existing courses and fields of study. Cases are made available for free via open-access publishing.
  • Active learning projects: Original homework assignments and in-class demonstrations, specially created by interdisciplinary teams, enable instructors to embed SERC-related material into a wide variety of existing classes. These original materials are available for free via open-access publishing at MIT OpenCourseWare.
  • Research community: Our SERC Scholars program, open to MIT undergraduates, graduate students, and postdocs from across the Institute, provides a range of ways for students and postdocs to deepen their engagement with SERC while growing a broader community on campus.
  • Research catalyst: We help to connect researchers from across MIT who share research interests in various aspects of social and ethical responsibilities of computing. We focus on connecting researchers from different disciplines, whose projects can benefit from combining methods, insights, and approaches from distinct fields of study.
  • Research infrastructure: Building on recommendations from our SERC Action Group on Computing, Data, and Anti-racism, and incorporating insights and experience from the Legal, Ethical, and Equity Committee for MIT Campus Planning, we are developing resources and guidance for responsible computing, with special emphasis on projects that involve human-sourced data and information.

Broader Engagements

  • Policy task forces: We convene task forces that bring together leading academic researchers, industry practitioners, and policymakers to address key questions that will shape the future. The focus is on developing concrete recommendations for technologically informed regulatory frameworks and policy-aware technological development, to guide safe, equitable, and innovative advances in various fields. Some task forces are coordinated with the new AI Policy Forum series. We coordinate with various programs, including the Technology and Policy Program, Internet Policy Research Initiative, and MIT Governance Lab.
  • Public forums: SERC sponsors public forums and accessible online materials related to computing, data, and society, drawing on insights from scholars, practitioners, and civic groups.

SERC Boards and Action Groups

Drawing on the expertise of colleagues from a wide range of fields, SERC is building a community of advisors and participants from across MIT, intersecting all 5 schools and 19 different departments, labs, and centers.


Internet Ethics Cases (Markkula Center for Applied Ethics)

Find ethics case studies on topics in Internet ethics including privacy, hacking, social media, the right to be forgotten, and hashtag activism. (For permission to reprint articles, submit requests to [email protected].)

  • AI-generated text, voices, and images used for entertainment productions and impersonation raise ethical questions.
  • Ethical questions arise in interactions among students, instructors, administrators, and providers of AI tools.
  • What can we learn from the Tay experience about AI and social media ethics more broadly?
  • Who should be consulted before using emotion-recognition AI to report on constituents' sentiments?
  • When "algorithm alchemy" wrongly accuses people of fraud, who is accountable?
  • Which stakeholders might benefit from a new age of VR "travel"? Which stakeholders might be harmed?
  • Ethical questions about data collection, data-sharing, access, use, and privacy.
  • As PunkSpider is pending re-release, ethical issues are considered about a tool that can spot and share vulnerabilities on the web, opening those results to the public.
  • With URVR, recipients can capture and share 360° 3D moments and live them out together.
  • VR rage rooms may provide therapeutic and inexpensive benefits while also raising ethical questions.


McCombs School of Business


Video series from Ethics Unwrapped:
  • Concepts Unwrapped: 36 short illustrated videos explain behavioral ethics concepts and basic ethics principles.
  • Concepts Unwrapped: Sports Edition: 10 short videos introduce athletes to behavioral ethics concepts.
  • Ethics Defined (Glossary): 58 animated videos, 1 to 2 minutes each, define key ethics terms and concepts.
  • Ethics in Focus: One-of-a-kind videos highlight the ethical aspects of current and historical subjects.
  • Giving Voice to Values: Eight short videos present the 7 principles of values-driven leadership from Gentile's Giving Voice to Values.
  • In It To Win: A documentary and six short videos reveal the behavioral ethics biases in super-lobbyist Jack Abramoff's story.
  • Scandals Illustrated: 30 videos, one minute each, introduce newsworthy scandals with ethical insights and case studies.


Case Studies

More than 70 cases pair ethics concepts with real-world situations. From journalism, performing arts, and scientific research to sports, law, and business, these case studies explore current and historic ethical dilemmas, their motivating biases, and their consequences. Each case includes discussion questions, related videos, and a bibliography.

  • A Million Little Pieces. James Frey's popular memoir stirred controversy and media attention after it was revealed to contain numerous exaggerations and fabrications.
  • Abramoff: Lobbying Congress. Super-lobbyist Abramoff was caught in a scheme to lobby against his own clients. Was a corrupt individual, a corrupt system, or both to blame?
  • Apple Suppliers & Labor Practices. Is tech company Apple, Inc. ethically obligated to oversee the questionable working conditions of other companies further down their supply chain?
  • Approaching the Presidency: Roosevelt & Taft. Some presidents view their responsibilities in strictly legal terms, others according to duty. Roosevelt and Taft took two extreme approaches.
  • Appropriating "Hope". Fairey's portrait of Barack Obama raised debate over the extent to which an artist can use and modify another's artistic work, yet still call it one's own.
  • Arctic Offshore Drilling. Competing groups frame the debate over oil drilling off Alaska's coast in varying ways depending on their environmental and economic interests.
  • Banning Burkas: Freedom or Discrimination? The French law banning women from wearing burkas in public sparked debate about discrimination and freedom of religion.
  • Birthing Vaccine Skepticism. Wakefield published an article riddled with inaccuracies and conflicts of interest that created significant vaccine hesitancy regarding the MMR vaccine.
  • Blurred Lines of Copyright. Marvin Gaye's estate won a lawsuit against Robin Thicke and Pharrell Williams for the hit song "Blurred Lines," which had a similar feel to one of Gaye's songs.
  • Bullfighting: Art or Not? Bullfighting has been a prominent cultural and artistic event for centuries, but in recent decades it has faced increasing criticism for animal rights abuse.
  • Buying Green: Consumer Behavior. Does purchasing green products, such as organic foods and electric cars, give consumers a moral license to indulge in unethical behavior?
  • Cadavers in Car Safety Research. Engineers at Heidelberg University insist that the use of human cadavers in car safety research is ethical because their research can save lives.
  • Cardinals' Computer Hacking. St. Louis Cardinals scouting director Chris Correa hacked into the Houston Astros' webmail system, leading to legal repercussions and a lifetime ban from MLB.
  • Cheating: Atlanta's School Scandal. Teachers and administrators at Parks Middle School adjust struggling students' test scores in an effort to save their school from closure.
  • Cheating: Sign-Stealing in MLB. The Houston Astros' sign-stealing scheme rocked the baseball world, leading to a game-changing MLB investigation and fallout.
  • Cheating: UNC's Academic Fraud. UNC's academic fraud scandal uncovered an 18-year scheme of unchecked coursework and fraudulent classes that enabled student-athletes to play sports.
  • Cheney v. U.S. District Court. A controversial case focuses on Justice Scalia's personal friendship with Vice President Cheney and the possible conflict of interest it poses to the case.
  • Christina Fallin: "Appropriate Culturation?" After Fallin posted a picture of herself wearing a Plains headdress on social media, uproar emerged over cultural appropriation and Fallin's intentions.
  • Climate Change & the Paris Deal. While climate change poses many abstract problems, the actions (or inactions) of today's populations will have tangible effects on future generations.
  • Cover-Up on Campus. While the Baylor University football team was winning on the field, university officials failed to take action when allegations of sexual assault by student athletes emerged.
  • Covering Female Athletes. Sports Illustrated stirs controversy when its cover photo of an Olympic skier seems to focus more on her physical appearance than her athletic abilities.
  • Covering Yourself? Journalists and the Bowl Championship. Can news outlets covering the Bowl Championship Series fairly report sports news if their own polls were used to create the news?
  • Cyber Harassment. After a student defames a middle school teacher on social media, the teacher confronts the student in class and posts a video of the confrontation online.
  • Defending Freedom of Tweets? Running back Rashard Mendenhall receives backlash from fans after criticizing the celebration of the assassination of Osama Bin Laden in a tweet.
  • Dennis Kozlowski: Living Large. Dennis Kozlowski was an effective leader for Tyco in his first few years as CEO, but eventually faced criminal charges over his use of company assets.
  • Digital Downloads. File-sharing program Napster sparked debate over the legal and ethical dimensions of downloading unauthorized copies of copyrighted music.
  • Dr. V's Magical Putter. Journalist Caleb Hannan outed Dr. V as a trans woman, sparking debate over the ethics of Hannan's reporting, as well as its role in Dr. V's suicide.
  • East Germany's Doping Machine. From 1968 to the late 1980s, East Germany (GDR) doped some 9,000 athletes to gain success in international athletic competitions despite being aware of the harmful side effects.
  • Ebola & American Intervention. Did the dispatch of U.S. military units to Liberia to aid in humanitarian relief during the Ebola epidemic help or hinder the process?
  • Edward Snowden: Traitor or Hero? Was Edward Snowden's release of confidential government documents ethically justifiable?
  • Ethical Pitfalls in Action. Why do good people do bad things? Behavioral ethics is the science of moral decision-making, which explores why and how people make the ethical (and unethical) decisions that they do.
  • Ethical Use of Home DNA Testing. The rising popularity of at-home DNA testing kits raises questions about privacy and consumer rights.
  • Flying the Confederate Flag. A heated debate ensues over whether or not the Confederate flag should be removed from the South Carolina State House grounds.
  • Freedom of Speech on Campus. In the wake of racially motivated offenses, student protests sparked debate over the roles of free speech, deliberation, and tolerance on campus.
  • Freedom vs. Duty in Clinical Social Work. What should social workers do when their personal values come into conflict with the clients they are meant to serve?
  • Full Disclosure: Manipulating Donors. When an intern witnesses a donor making a large gift to a non-profit organization under misleading circumstances, she struggles with what to do.
  • Gaming the System: The VA Scandal. The Veterans Administration's incentives were meant to spur more efficient and productive healthcare, but not all administrators complied as intended.
  • German Police Battalion 101. During the Holocaust, ordinary Germans became willing killers even though they could have opted out from murdering their Jewish neighbors.
  • Head Injuries & American Football. Many studies have linked traumatic brain injuries and related conditions to American football, creating controversy around the safety of the sport.
  • Head Injuries & the NFL. American football is a rough and dangerous game, and its impact on players' brain health has sparked a hotly contested debate.
  • Healthcare Obligations: Personal vs. Institutional. A medical doctor must make a difficult decision when informing patients of the effectiveness of flu shots while upholding institutional recommendations.
  • High Stakes Testing. In the wake of the No Child Left Behind Act, parents, teachers, and school administrators take different positions on how to assess student achievement.
  • In-FUR-mercials: Advertising & Adoption. When the Lied Animal Shelter faces a spike in animal intake, an advertising agency uses its moral imagination to increase pet adoptions.
  • Krogh & the Watergate Scandal. Egil Krogh was a young lawyer working for the Nixon Administration whose ethics faded from view when he was asked to play a part in the Watergate break-in.
  • Limbaugh on Drug Addiction. Radio talk show host Rush Limbaugh argued that drug abuse was a choice, not a disease. He later became addicted to painkillers.
  • LochteGate. U.S. Olympic swimmer Ryan Lochte's "over-exaggeration" of an incident at the 2016 Rio Olympics led to very real consequences.
  • Meet Me at Starbucks. Two black men were arrested after an employee called the police on them, prompting Starbucks to implement "racial-bias" training across all its stores.
  • Myanmar Amber. Buying amber could potentially fund an ethnic civil war, but refraining allows collectors to acquire important specimens that could be used for research.
  • Negotiating Bankruptcy. Bankruptcy lawyer Gellene successfully represented a mining company during a major reorganization, but failed to disclose potential conflicts of interest.
  • Pao & Gender Bias. Ellen Pao stirred debate in the venture capital and tech industries when she filed a lawsuit against her employer on grounds of gender discrimination.
  • Pardoning Nixon. One month after Richard Nixon resigned from the presidency, Gerald Ford made the controversial decision to issue Nixon a full pardon.
  • Patient Autonomy & Informed Consent. Nursing staff and family members struggle with informed consent when taking care of a patient who has been deemed legally incompetent.
  • Prenatal Diagnosis & Parental Choice. Debate has emerged over the ethics of prenatal diagnosis and reproductive freedom in instances where testing has revealed genetic abnormalities.
  • Reporting on Robin Williams. After Robin Williams took his own life, news media covered the story in great detail, leading many to argue that such reporting violated the family's privacy.
  • Responding to Child Migration. An influx of child migrants posed logistical and ethical dilemmas for U.S. authorities while intensifying ongoing debate about immigration.
  • Retracting Research: The Case of Chandok v. Klessig. A researcher makes the difficult decision to retract a published, peer-reviewed article after the original research results cannot be reproduced.
  • Sacking Social Media in College Sports. In the wake of questionable social media use by college athletes, the head coach at the University of South Carolina bans his players from using Twitter.
  • Selling Enron. Following the deregulation of electricity markets in California, private energy company Enron profited greatly, but at a dire cost.
  • Snyder v. Phelps. Freedom of speech was put on trial in a case involving the Westboro Baptist Church and their protesting at the funeral of U.S. Marine Matthew Snyder.
  • Something Fishy at the Paralympics. Rampant cheating has plagued the Paralympics over the years, compromising the credibility and sportsmanship of Paralympian athletes.
  • Sports Blogs: The Wild West of Sports Journalism? Deadspin pays an anonymous source for information related to NFL star Brett Favre, sparking debate over the ethics of "checkbook journalism."
  • Stangl & the Holocaust. Franz Stangl was the most effective Nazi administrator in Poland, killing nearly one million Jews at Treblinka, but he claimed he was simply following orders.
  • Teaching Blackface: A Lesson on Stereotypes. A teacher was put on leave for showing a blackface video during a lesson on racial segregation, sparking discussion over how to teach about stereotypes.
  • The Astros' Sign-Stealing Scandal. The Houston Astros rode a wave of success, culminating in a World Series win, but it all came crashing down when their sign-stealing scheme was revealed.
  • The Central Park Five. Despite the indisputable and overwhelming evidence of the innocence of the Central Park Five, some involved in the case refuse to believe it.
  • The CIA Leak. Legal and political fallout follows from the leak of classified information that led to the identification of CIA agent Valerie Plame.
  • The Collapse of Barings Bank. When faced with growing losses, investment banker Nick Leeson took big risks in an attempt to get out from under them. He lost.
  • The Costco Model. How can companies promote positive treatment of employees and benefit from leading with best practices? Costco offers a model.
  • The FBI & Apple: Security vs. Privacy. How can tech companies and government organizations strike a balance between maintaining national security and protecting user privacy?
  • The Miss Saigon Controversy. When a white actor was cast as the half-French, half-Vietnamese character in the Broadway production of Miss Saigon, debate ensued.
  • The Sandusky Scandal. Following the conviction of assistant coach Jerry Sandusky for sexual abuse, debate continues over how much university officials and head coach Joe Paterno knew of the crimes.
  • The Varsity Blues Scandal. A college admissions prep advisor told wealthy parents that while there were front doors and back doors into universities, he had created a side door that was worth exploring.
  • Therac-25. Providing radiation therapy to cancer patients, the Therac-25 had malfunctions that resulted in six deaths. Who is accountable when technology causes harm?
  • Welfare Reform. The Welfare Reform Act changed how welfare operated, intensifying debate over the government's role in supporting the poor through direct aid.
  • Wells Fargo and Moral Emotions. In a settlement with regulators, Wells Fargo Bank admitted that it had created as many as two million accounts for customers without their permission.



New Case Study: Unmanaged GTM Tags Become a Security Nightmare


Are your tags really safe with Google Tag Manager? If you've been thinking that using GTM means your tracking tags and pixels are safely managed, it might be time to think again. In this article we look at how a big-ticket seller that does business on every continent came unstuck when it forgot that you can't afford to let tags go unmanaged or become misconfigured.

Read the full case study here.

Google Tag Manager saves website owners time and money. Its visual interface lets them attach tracking tags to their sites and then modify them as needed without calling a developer every time. Such tags gather the marketing and analytics data that power growth, and GTM makes them easier to manage; but with strict rules around data privacy to consider, you can't trust it completely: it needs active oversight.

The ticket seller

A case in point that we recently became aware of involves a global company that sells tickets to live events. In a global operation it's important to establish who has overall responsibility for each function, but in this case that was lacking. In a culture where the lines of responsibility aren't clear, it isn't surprising that the marketing team outsourced something to an external company because it saw it as a security concern it could offload rather than a marketing issue.

Download the full case study here.

The task was the management of its Google Tag Manager usage. The team may have felt that marketing and growth were their priorities, so this move made sense; but security is one of those strands that runs through everything. The consequence of outsourcing this work was a data breach, because the contractor didn't catch a misconfiguration.

GDPR, CCPA, the Cyber Resilience Act, and other privacy-related legislation require companies not to let this happen: they must protect their customers' data and obtain explicit permission before collecting and sharing it, and because of the misconfiguration that didn't happen. Getting it wrong in this way can be very expensive, in both money and reputation, not to mention that cybercriminals have used Google Tag Manager as a vehicle for web skimming and keylogging attacks. You can read more about the details of this story in our case study.

How big a problem is misconfiguration?

As we explored the case of the global ticketing company, we became curious about how widespread this kind of problem might be. How many other companies might be exposing themselves to potential multi-million-dollar class action lawsuits brought by individuals whose data they have shared without permission or against local privacy regulations? And how many might be at risk of attracting big penalties from data privacy watchdogs and industry regulators?

The sample study

We decided to look at a sample of 4,000 websites that use Google Tag Manager. It turned out that GTM connects an average website to around five applications, and that 45% of these apps are used for advertising, 30% are pixels, and 20% are analytics tools.

[Chart omitted: the apps most often connected via Google Tag Manager, in order of popularity.]

For more information, read the full case study here .

We found that across all industries, Google Tag Manager and its connected apps account for 45% of users' total risk exposure. Overall, 20% of these apps are leaking personal or sensitive user data due to a misconfiguration.

Misconfigurations showed up in a handful of applications, which together account for 85% of all cases.

[Chart omitted: applications in which misconfigurations were found.]

Oh, the irony!

Ironically, we found that Google Tag Manager itself is responsible for the most cases of misconfigurations that might leak user data and land the website owners who unquestioningly trust it in hot water.

Now, this is not an attack on Google Tag Manager, because it's a very useful and effective tool when handled safely. Our intention is to point out the dangers of not managing the potential risks that come with using it, and to encourage you to read all about the many practical ways of ensuring that your tags behave themselves.

Continuous protection

In considering tactics, techniques, and procedures in cyber defense, organizations should consider employing a continuous web threat management system, such as Reflectiz. Its digital tag management and security tools give your teams complete visibility and control over tags, issuing alerts on any changes to tags (and indeed to any code on the website) for review and approval. It satisfies the conflicting priorities of both marketing and security teams, allowing Security to do the gatekeeping without restricting the growth and innovation ambitions of Marketing. Read the full case study to find out more.
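As a minimal illustration of what continuous oversight of tags can involve, here is a sketch; it is a generic example, not Reflectiz's product or any vendor's API. It fetches a page, inventories the external script hosts the page loads, hashes inline scripts for baseline comparison, and flags anything outside an approved allowlist. The URL and allowlist are hypothetical.

```python
import hashlib
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical values for illustration only.
PAGE_URL = "https://shop.example.com/checkout"
APPROVED_HOSTS = {"www.googletagmanager.com", "shop.example.com"}

class ScriptCollector(HTMLParser):
    """Collect external script sources and hash inline script bodies."""

    def __init__(self):
        super().__init__()
        self.srcs = []
        self.inline_hashes = []
        self._in_script = False
        self._buf = ""

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)
            else:
                self._in_script = True

    def handle_data(self, data):
        if self._in_script:
            self._buf += data

    def handle_endtag(self, tag):
        if tag == "script" and self._in_script:
            self._in_script = False
            self.inline_hashes.append(hashlib.sha256(self._buf.encode()).hexdigest())
            self._buf = ""

html = urllib.request.urlopen(PAGE_URL).read().decode("utf-8", "replace")
collector = ScriptCollector()
collector.feed(html)

# Alert on script hosts that are not on the approved list; a real system would
# also diff the inline-script hashes against a stored baseline on every run.
for src in collector.srcs:
    host = urlparse(src).netloc or urlparse(PAGE_URL).netloc
    if host not in APPROVED_HOSTS:
        print(f"ALERT: unapproved script host {host} ({src})")
print(f"{len(collector.inline_hashes)} inline scripts hashed for baseline comparison")
```

Note that a static fetch like this misses tags that GTM injects at runtime, which is why commercial monitors render pages in a real browser and re-scan continuously; the sketch only shows the shape of the check.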




COMMENTS

  1. Code of Ethics Case Studies

    The ACM Code of Ethics and Professional Practice is meant to inform practice and education, and it is useful for individual decision-making. Computing professionals should approach issues that arise in everyday practice with a holistic reading of the principles of the ACM Code of Ethics and evaluate the situation with thoughtful consideration to the circumstances. In all cases, the computing ...

  2. Stanford Computer Ethics Case Studies and Interviews

    Their original use is for an undergraduate course on ethics, public policy, and technology, and they have been designed to prompt discussion about issues at the intersection of those fields. These works are licensed under a Creative Commons Attribution 4.0 International License. Case Studies. Algorithmic Decision-Making and Accountability

  3. Case Studies in Social and Ethical Responsibilities of Computing

    The MIT Case Studies in Social and Ethical Responsibilities of Computing (SERC) aims to advance new efforts within and beyond the Schwarzman College of Computing. The specially commissioned and peer-reviewed cases are brief and intended to be effective for undergraduate instruction across a range of classes and fields of study, and may also be ...

  4. Computer Engineering Cases

    Case studies on ethics for computer and software engineers. Open Source AI: To Release or Not To Release the GPT-2 Synthetic Text Generator. An AI Ethics Case Study. ... In this ethics case, a woman is displeased with her work role at a computer hardware company. May the Truth be with You.

  5. Technology Ethics Cases

    This template provides the basics for writing ethics case studies in technology (though with some modification it could be used in other fields as well). AI, Comedy, and Counterculture. An Ethics Case Study. AI-generated text, voices, and images used for entertainment productions and impersonation raise ethical questions.

  6. Case Materials

    Ethics in Computing Links. Contact Us. Case Materials. Clicking on the case below will take you to a table of contents of materials available for that case. Each contains a brief history of the case, teaching suggestions specific to the case, socio-technical and ethical analysis, two versions of exercises, and a variety of supporting documents.

  7. Fostering ethical thinking in computing

    A case studies series from the Social and Ethical Responsibilities of Computing program at MIT delves into a range of topics, from social and ethical implications of computing technologies and the racial disparities that can arise from deploying facial recognition technology in unregulated, real-world settings to the biases of risk prediction algorithms in the criminal justice system and the ...

  8. PDF Computer Ethics in The Undergraduate Curriculum: Case Studies and The

    Computer science faculty are encouraged to adapt these examples in their classrooms, and to develop new examples based on different case studies. The three cases here were adapted from Computer Ethics by Deborah Johnson [6], and are used with permission of the author. The first case was influenced by an earlier paper by Michael C. McFarland. [7] 2.

  9. (PDF) How to Do Computer Ethics—A Case Study: The ...

    cluster model can be applied, we present and analyze some privacy and security issues regarding the. Electronic Mall Bodensee, which is a World-Wide-Web-based business project in Central Europe. 1 ...

  10. Computer Ethics: A Case-Based Approach

    Computer Ethics: A Case-Based Approach. Computer Ethics: A Case-Based Approach teaches students to solve ethical dilemmas in the field of computing, taking a philosophical, rather than a legal, approach to the topic. It first examines the principles of Idealism, Realism, Pragmatism, Existentialism, and Philosophical Analysis, explaining how each might be adopted as a basis for solving ...

  11. Top 10 technology and ethics stories of 2020

    1. Technology firms come under scrutiny for ties to law enforcement. Following a massive international backlash against police racism and brutality sparked by the killing of George Floyd in ...

  12. PDF Scenarios for Computer Ethics Education

    During these steps, 20 different ethical scenarios regarding computer ethics were designed and sent to the academicians for having feedback. It can be thought that this paper is the first draft of a 22-month project. Keywords: computer ethics, case-based approach, electronic performance support system. 1. Computer Ethics.

  13. 9.8 Case Studies of Ethics

    The Case. Charlie and Carol are students at a university in a computer science program. Each writes a program for a class assignment. Charlie's program happens to uncover a flaw in a compiler that ultimately causes the entire computing system to fail; all users lose the results of their current computation.

  14. PDF Responsible Use of Technology: The Microsoft Case Study

    with similar intentions to benefit from its experience. This case study aims to promote discussion, critiqu. s, as well as efforts to build upon Microsoft's work. The World Economic Forum and its partners in this project hope more organizations not only operationalize ethics in their use of technology but al.

  15. Crisis Data: An Ethics Case Study

    An AI Ethics Case Study. Irina Raicu "Depression please cut to the chase." by darcyadelaide is marked with CC BY 2.0. In January 2022, Politico published an article about a nonprofit called Crisis Text Line, which offers support via text messages for people who are going through mental health crises. For years, the nonprofit had been collecting ...

  16. A Case Study for Computer Ethics in Context

    Aimed at addressing the difficulties associated with teaching often abstract elements of technical ethics, this book is an extended fictional case study into the complexities of technology and social structures in complex organizations. Within this case study, an accidental discovery reveals that the algorithms of Professor John Blackbriar are ...

  17. Ethics and Professional Responsibility in Computing

    August 23, 2007. Abstract. Computing professionals have ethical obligations to clients, employers, other professionals, and the public, in fulfilling their professional responsibilities. These obligations are expressed in codes of ethics, which can be used to make decisions about ethical problems.

  18. Computer Ethics: Some Case Study

    VARUN KUMAR. This document discusses four case studies related to computer ethics: 1) copying software, 2) data mining customer information, 3) freedom of expression online, and 4) gender representation in computer games. Each case study presents an ethical dilemma, outlines relevant details, and poses discussion questions about professionals ...

  19. How to Do Computer Ethics—A Case Study: The Electronic Mall Bodensee

    ential--textbooks in the field was Johnson's Computer Ethics (1985). The first chapter of this book was devoted primarily to utilitarian and Kantian theory, but the vast majority of the ethical analyses in the rest of the book ignore those theories. During the past decade, applied ethics--including computer ethics--has become methodologically

  20. Cyber Harassment

    Cyber Harassment. After a student defames a middle school teacher on social media, the teacher confronts the student in class and posts a video of the confrontation online. In many ways, social media platforms have created great benefits for our societies by expanding and diversifying the ways people communicate with each other, and yet these ...

  21. Social and Ethical Responsibilities of Computing

    We also help to support new courses, such as 24.133 (Experiential Ethics). Case studies: We commission and publish a series of peer-reviewed MIT Case Studies in Social and Ethical Responsibilities of Computing. The cases are brief, based on original research, and appropriate for use in undergraduate instruction across a range of existing ...

  22. Internet Ethics Cases

    Markkula Center for Applied Ethics. Focus Areas. Internet Ethics. Internet Ethics Cases. Find ethics case studies on topics in Internet ethics including privacy, hacking, social media, the right to be forgotten, and hashtag activism. (For permission to reprint articles, submit requests to [email protected] .)

  23. Case Studies

    Case Studies. More than 70 cases pair ethics concepts with real world situations. From journalism, performing arts, and scientific research to sports, law, and business, these case studies explore current and historic ethical dilemmas, their motivating biases, and their consequences. Each case includes discussion questions, related videos, and ...

  24. New Case Study: Unmanaged GTM Tags Become a Security Nightmare

    Download the full case study here. The task was the management of its Google Tag Manager usage. The team may have felt that marketing and growth were their priorities and so this move made sense, but security is one of those strands that runs through everything. The consequence of outsourcing this work was a data breach because the contractor ...

  25. IBM Blog

    Highlights by topic. Artificial intelligence Analytics Business automation Cloud Compute and servers IT automation Security and identity Sustainability. Featured. May 31, 2024. Generative AI can revolutionize tax administration and drive toward a more personalized and ethical future. News and thought leadership from IBM on business topics ...