Case Analysis/Discussion #1: Artificial Intelligence and Migration

Description

Begin by thoroughly reading the attached case study to understand the context, key players, and issues involved. Identify and note the main issues or challenges presented in the case. Analyze the effectiveness and ethical considerations of AI applications in migration and border security. Assess the responses of governments and organizations to the challenges and criticisms raised against AI implementations. Then answer the case questions below:

1. Effectiveness of AI in Visa Processing: How did the implementation of AI in the UK’s visa approval process affect efficiency and fairness? Were there any biases or discrimination evident in the process?
2. Ethical Considerations: What ethical concerns arise from the use of AI in migration control, especially in light of the incidents with African visa applicants and the potential biases in the system?
3. Governmental Responses and Transparency: How did the UK government respond to allegations of bias and discrimination in its AI system? What does this indicate about governmental transparency and accountability in the use of AI?
4. Comparison with Other AI Applications: How does the UK’s use of AI in visa processing compare to other examples of AI in migration control, such as the EU’s iBorderCtrl program?
5. Potential for Improvement: In light of the challenges faced, how can AI systems be improved to ensure fairness and reduce bias in migration control?
6. Long-term Implications: What are the long-term implications of relying on AI for migration and border security? How can governments balance efficiency, security, and ethical considerations?


Artificial Intelligence and Migration
Joseph Rice
Lessons in Governance
A primary example of bias and discrimination in algorithms has been witnessed in the
United Kingdom’s implementation of AI in its visa approval process. The Home Office, the
British government department regulating immigration and passports, began using an algorithm to
assess visa applications in 2015. Described as a ‘streaming tool,’ the system assigned a red,
amber, or green rating to each visa applicant corresponding to the applicant’s level of ‘risk,’ with
riskier applicants receiving a red rating and safer applicants a green one.1 If an
applicant received a green rating, a decision was made quickly and was reviewed by a second person
only if the first denied the application. If an applicant was given a red rating, human reviewers
were given more time to review the application; if they approved it, a second person reviewed
it as well, but if the first reviewer rejected the visa, it was not reviewed again.2
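The asymmetry in this review flow is easier to see in code. The sketch below is a minimal reconstruction of the routing rules as described above, not the Home Office’s actual implementation; the function and field names are hypothetical.

```python
from enum import Enum

class Rating(Enum):
    GREEN = "green"
    AMBER = "amber"
    RED = "red"

def route_application(rating: Rating, first_decision: str) -> str:
    """Hypothetical reconstruction of the streaming tool's review routing.

    Note the asymmetry: a second reviewer is triggered by a *denial* of a
    green-rated application but by an *approval* of a red-rated one, so a
    red-rated applicant rejected on first review gets no second look.
    """
    if rating == Rating.GREEN:
        # Fast track: second review only if the first reviewer denies.
        return "second review" if first_decision == "deny" else "final: approved"
    if rating == Rating.RED:
        # Slow track: second review only if the first reviewer approves.
        return "second review" if first_decision == "approve" else "final: denied"
    return "standard review"  # amber: the sources do not detail this path
```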
The implementation of the algorithm is one way in which governments are attempting to
become more efficient. Administrative inefficiency is most apparent in clerical and repetitive
work. Take, for example, the process of approving visas: paperwork must be reviewed for every
single applicant, which takes considerable time and manpower. In 2021, the United Kingdom
received 628,698 applications for visitor visas. Large as that number is, it is 77% lower than
the figure for 2019, before the COVID-19 pandemic, when nearly two million applications were
received.3 Furthermore, the approval process for UK visas takes three to six weeks, illustrating
how much time each application requires to be reviewed and processed.
The UK streaming tool has repeatedly seen African visa applicants turned down for
seemingly no reason. In November 2018, 17 delegates from Africa and Asia invited to
attend a conference held by the Women Leaders in Global Health were denied visas. The event
drew condemnation from scientists, including Peter Piot, the director of the London School of
Hygiene & Tropical Medicine and a discoverer of the Ebola virus, who penned a letter to the Home
Secretary.4 In April 2019, a team of six Sierra Leonean Ebola researchers was denied visas to attend
a training program in the UK. The same month, a London School of Economics Africa Summit
was missing 24 of the 25 researchers who had been invited, due to visa denials.5
An investigation into this pattern of seemingly discriminatory visa denials was initiated by
the All-Party Parliamentary Group (APPG) for Africa, a group of parliamentarians who seek to
foster beneficial relationships between African countries and the UK.6 The results were stark:
African visa applicants were denied at higher rates than applicants from other parts of the world.
While the overall refusal rate for visas from 2016 to September 2018 stood at 12 percent, African
applications were denied at a rate of 27 percent, 15 percentage points higher than the rate for
Middle Eastern and Asian applicants and 23 percentage points higher than that for North American applicants.7
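Working backward from these percentage-point gaps makes the disparity concrete. The figures below are derived from the numbers quoted above, not taken directly from the APPG report:

```python
# Refusal rate quoted from the APPG findings (2016 - September 2018).
african_rate = 27  # percent

# The report gives the other regions as percentage-point gaps below
# the African rate; recover their implied refusal rates.
mideast_asia_rate = african_rate - 15   # 12 percent
north_america_rate = african_rate - 23  # 4 percent

print(f"Middle Eastern/Asian applicants: {mideast_asia_rate}%")
print(f"North American applicants: {north_america_rate}%")
# African applicants were thus refused at more than double the 12 percent
# overall rate and nearly seven times the North American rate.
```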
UK government officials initially defended the visa approval process against claims of
institutional racism. In response to the incidents of April 2019, a Home Office spokesperson said
the department welcomes international academics and recognizes their contributions, but
clarified that all visa applications are considered on their merits and held to the immigration
rules set by the government.8 When questioned about the report by the APPG for Africa,
then-Prime Minister Theresa May said visa applications must be evaluated thoroughly and argued that the
approval rate for African visa applicants was higher than it had been in the previous ten years.
Similarly, the Home Office said that it was seeing visa applications from African applicants at
their highest rate since 2013 and that visa applicants are not discriminated against based on “age,
gender, religion, or race.”9
The algorithm had the potential to be particularly prejudiced because it relied upon
machine learning to refine its ability to assess applicant risk based on prior cases; any hint of
discrimination in those cases could be perpetually replicated. If the early implementation of the
algorithm was influenced by Home Office norms, such as scrutinizing visa applicants from Africa
more heavily, the algorithm would learn that African applicants were riskier and required more
intensive analysis. Since there was already suspected prejudice against African applicants,
immigrant rights groups could therefore make a strong case for preventing further discrimination
against marginalized groups.
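This feedback loop is a well-documented failure mode of systems trained on their own past decisions. The sketch below is a deliberately simplified illustration of the mechanism, not the Home Office’s system: it assumes a toy risk model retrained each round on labels produced by biased historical reviews.

```python
import random

def biased_historical_review(nationality_group: str) -> int:
    """Toy stand-in for past human decisions: group B is (unfairly)
    flagged as risky more often than group A for identical applicants."""
    flag_rate = 0.30 if nationality_group == "B" else 0.10
    return 1 if random.random() < flag_rate else 0

def retrain(history):
    """'Model' = per-group flag frequency learned from past decisions."""
    return {
        group: sum(d for g, d in history if g == group)
        / sum(1 for g, _ in history if g == group)
        for group in ("A", "B")
    }

random.seed(0)
# Round 1: the model learns directly from biased human labels.
history = [(g, biased_historical_review(g)) for g in ("A", "B") * 500]
model = retrain(history)

# Later rounds: the model's own outputs become the next training labels,
# so the initial disparity is preserved (or amplified) indefinitely.
for _ in range(3):
    history = [(g, 1 if random.random() < model[g] else 0)
               for g in ("A", "B") * 500]
    model = retrain(history)
    print({g: round(rate, 2) for g, rate in model.items()})
```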
As a result of the apparent discrimination against African visa applicants, two organizations
filed a legal complaint with the British High Court: the Joint Council for the Welfare of Immigrants
(JCWI) and Foxglove Legal, a “technology justice advocacy group.”10 The complaint alleged that
the Home Office’s visa approval process violated the Equality Act 2010 and was racially
discriminatory.11 However, before the algorithm’s alleged discrimination could be evaluated by
the Court, the Home Office announced that from August 17, 2020, it would no longer use
the algorithm to screen visa applicants. While stating it would redesign the visa approval
process, the Home Office denied the allegations brought forth by the JCWI and thus denied that the
algorithm was programmed to be discriminatory.12
Because the Home Office pulled the algorithm before the case officially went to court, it avoided
having to disclose further details about the algorithm’s specific functions and design.13
There can therefore be no public investigation into whether the algorithm was truly discriminatory,
as the claims suggested. While immigration rights activists can see this as a win, it raises
important questions: Why would the Home Office publicly deny discrimination while also ceasing
its use of the algorithm? Does this set a dangerous precedent for claims against future
instances of discrimination by governmental agencies using algorithms in administrative work?
Will the updated algorithm be any better? None of these questions can be answered with
confidence, but they remind us of the complexity and fallibility of human-designed algorithms and
technology.
Implementing AI in government functions aids efficiency and can decrease the amount
of human time that must be spent on a given task. In 2019, the European Union began testing an
AI program called iBorderCtrl. Essentially a lie detector, iBorderCtrl uses AI to examine a
migrant’s gestures while they answer questions about their journey or their possessions. If the
traveler is determined to be telling the truth, they are free to cross the border; if the machine
suspects lying, the traveler is subject to further review by a human agent and biometric
data collection.14
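In outline, the screening is a two-stage triage: an automated first pass, with human escalation only on a suspicion signal. The sketch below captures that flow under stated assumptions; the threshold, score, and function names are hypothetical, since the project’s actual scoring internals were never made public.

```python
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    deception_score: float  # hypothetical 0.0-1.0 output of the gesture model

# Assumed threshold; iBorderCtrl's real calibration was not disclosed.
SUSPICION_THRESHOLD = 0.5

def triage(result: ScreeningResult) -> str:
    """Two-stage triage: automated pass, else escalate to a human agent."""
    if result.deception_score < SUSPICION_THRESHOLD:
        return "cleared: free to cross"
    # A false positive here costs the traveler a secondary inspection,
    # which is why critics focused on the model's error rates.
    return "escalated: human review + biometric data collection"

print(triage(ScreeningResult(deception_score=0.3)))  # cleared
print(triage(ScreeningResult(deception_score=0.8)))  # escalated
```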
All of this came at a time when migration to Europe was growing at a steady rate and
more than 700 million people were entering the EU annually. Additionally, political pressure on
European lawmakers to track the movement of migrants within Europe grew alongside the
increase in migration. For 2021 to 2027, the European Commission proposed spending nearly
35 billion euros on border control and migration management. While iBorderCtrl promised to
speed up traveler intake, the AI inevitably drew concern. Giovanni Buttarelli, the head of the
EU’s data protection watchdog, said he was concerned that the system might discriminate
against people based on their ethnicity or country of origin, since the system functions
primarily on facial features and could become biased based on skin color.15
Out of concern for the ethics of iBorderCtrl, Patrick Breyer, a member of the
European Parliament, initiated a legal dispute against the European Research Executive Agency
to unveil classified documents on the results of the iBorderCtrl trial and its ethical justifiability.
The court deemed it necessary to release certain documents to the public, reasoning that there
ought to be more transparency and democratic oversight of the development of new surveillance
technologies. However, it also ruled that certain documents could not be released, in order to
protect commercial interests and knowledge.16
Because the technology was only used in an experimental phase, many of the concerns did
not come to fruition. That does not mean, however, that similar technology will not emerge in the
future, nor that there will be adequate governance and public knowledge of it when it does. For
example, much like Breyer, researchers at the Hermes Center for Transparency and Digital Human
Rights used freedom of information laws in an attempt to review internal documents about
iBorderCtrl. Many of the pages were heavily redacted, with some completely blacked out, which
raises alarms as to what extent such technologies can be shielded from scrutiny for commercial interests.17
Is it ethical to leave someone’s chances of entering a country up to a computer? Can a
computer truly understand the nuances of human expression? In reality, a system like iBorderCtrl
does have many useful applications. In an increasingly globalized world where traveling is easier
than ever, perhaps certain technologies are necessary to decrease congestion and improve
efficiency. More applications of similar technologies are sure to appear in the future;
whether they will be adequately governed is up to the people.
Lessons in Detection
Office settings, however, are not the only realm of governance that can be improved
with AI. Governments have begun implementing AI in national security, installing AI-powered
devices to surveil their borders. Strong borders, of high importance to the United States, are
central to national sovereignty. However, the borders with both Mexico and Canada are expansive
and, especially in the south, subject to harsh conditions. Implementing technologies that utilize
AI allows both for efficiency in the deployment of human agents and, possibly, for greater
effectiveness in detecting illegal activities.
The use of technology in border security has been explored for several decades. In 2006,
President George W. Bush initiated the Secure Border Initiative Network (SBInet) to establish a
‘virtual wall’ along the southern border. The Boeing-led system ultimately failed, however,
after five years of development and more than one billion dollars spent to surveil just 53 miles
of the nearly 2,000-mile border between the United States and Mexico. The system was unable to
distinguish between objects, such as humans and animals, and did little to ease or improve the
process of border security.18
While the Trump administration focused heavily on the construction of a physical wall
between the United States and Mexico, there has also been significant development of a
technological border. Since the beginning of a five-year deal with California-based defense
contractor Anduril, around 175 autonomous surveillance towers (ASTs) have been
deployed along the border. The towers employ cameras, radar, and thermal imaging to detect
movement near the border and determine what the moving object is, as can be observed in Exhibit 1. With
its machine learning technology, each tower can learn patterns; for example, an AST on a
private ranch no longer analyzes the owner’s pickup truck when he drives past.19
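The ranch-truck example describes a simple allowlisting behavior: repeated benign detections stop generating alerts. The sketch below illustrates one plausible way such suppression could work; it is an illustrative assumption, since Anduril has not published how its towers actually implement this.

```python
from collections import Counter

class TowerAlertFilter:
    """Illustrative alert suppression: objects sighted benignly many times
    at the same tower stop triggering alerts (e.g., the rancher's truck)."""

    def __init__(self, benign_threshold: int = 5):
        self.benign_threshold = benign_threshold
        self.sightings = Counter()  # object signature -> benign sighting count

    def observe(self, signature: str, classification: str) -> bool:
        """Return True if this detection should alert a CBP agent."""
        if classification in ("animal", "vegetation"):
            return False  # classifier already filters non-human movement
        if self.sightings[signature] >= self.benign_threshold:
            return False  # learned as routine traffic; suppress the alert
        self.sightings[signature] += 1
        return True

tower = TowerAlertFilter()
for _ in range(7):
    alerted = tower.observe("white-pickup-ranch-rd", "vehicle")
print(alerted)  # False: after repeated sightings, the truck is ignored
```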
The towers are equipped with cameras that have a three-mile detection radius and night vision
technology, able both to detect migrants making the trek to the southern US border and to aid in
rescuing them. When a group of migrants moves out of one tower’s frame, Customs and Border
Protection (CBP) can still follow them because the towers communicate and share information.20
The towers have been able both to replace and to complement the work of Border Patrol agents.
Before the implementation of the ASTs, agents would have to physically survey the desert with
binoculars. Now, the cameras can run for 24 hours on solar power provided by only one hour of
sunlight.21
The implementation of AI-powered cameras on national borders, while an impressive use
of AI, may not be as successful as proponents claim. Journalists have asked both Border Patrol
and Anduril whether the technology actually reduces illegal border crossings. CBP answered that
effectiveness was measured by specifications such as reliability or survivability but did not say
whether fewer migrants were crossing the border illegally; Anduril responded that determining
effectiveness was a question for Border Patrol, not for it.22
The usage of ASTs has raised ethical questions as well. First devised under the
Clinton administration, the concept of ‘prevention through deterrence’ has continued to be
implemented by US Border Patrol. The original idea was to deter migrants from crossing the
border near cities, so walls were erected to push migrants toward the desert, using the potential of
death as a deterrent. The ASTs rely on a similar idea: migrants hoping to stay out of the cameras’
detection will follow different routes where the journey is more difficult and there is a greater
likelihood they will be apprehended by CBP.23
Reports on the development of SBInet show that migrant deaths became more concentrated
and occurred in higher numbers as a result of the construction of surveillance towers, as displayed
in Exhibit 2.24 Since migrants were deterred away from certain migratory corridors, they were
funneled into more dangerous areas, walking farther and exerting themselves more. We can
therefore say that, to an extent, technological border surveillance can lead to an increase in
migrant mortality. Some might ask: is it necessarily CBP’s fault that these cameras have possibly
led to greater migrant casualties? On the other hand, at what point does protecting national
sovereignty become state-sanctioned violence?
A perhaps less controversial use of AI on a US border is the Northern Border Remote
Video Surveillance System (NBRVSS). The NBRVSS operates cameras and radar at around 22
sites to surveil the 360-mile stretch of the US-Canada water border from Buffalo,
NY to Port Huron, MI. Aimed at catching drug smugglers, the system can detect vessels departing
from Canada and analyze their movements. Because the system is always watching, it can learn what
routine boat movements look like and thus, when applicable, direct CBP officers to a
suspicious vessel exhibiting irregular movement.25
The NBRVSS, furthermore, goes beyond detecting unusual movement. If, for example, a
jet ski speeds across the international waterway, or a boat sails along the Canadian border and then
darts quickly to the US side, both signs of suspicious behavior, CBP can use the cameras to see
what the boat looks like, determine the number of passengers, and run background checks on
the vessel’s registration number.26 The system thus combines new AI technology with
traditional border security operations. Its efficiency lies in giving CBP access to more
substantial and meaningful data, together with the ability to further analyze the vessels
the AI system deems suspicious.
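The description amounts to trajectory anomaly detection: learn a baseline of routine vessel movement, then flag departures from it. The sketch below shows the idea with a deliberately simple speed-based baseline; the features and threshold are illustrative assumptions, as CBP has not published the system’s actual model.

```python
import statistics

class VesselAnomalyDetector:
    """Illustrative baseline-and-flag scheme for vessel movement.

    Learns the typical speed distribution of observed vessels, then
    flags any track whose speed falls far outside that baseline (e.g.,
    a jet ski sprinting across the waterway).
    """

    def __init__(self, z_threshold: float = 3.0):
        self.z_threshold = z_threshold
        self.observed_speeds: list[float] = []

    def learn(self, speed_knots: float) -> None:
        self.observed_speeds.append(speed_knots)

    def is_suspicious(self, speed_knots: float) -> bool:
        mean = statistics.mean(self.observed_speeds)
        stdev = statistics.stdev(self.observed_speeds)
        return abs(speed_knots - mean) / stdev > self.z_threshold

detector = VesselAnomalyDetector()
for s in (6.2, 7.1, 5.8, 6.5, 7.4, 6.9, 5.5, 6.0):  # routine traffic
    detector.learn(s)

print(detector.is_suspicious(6.4))   # False: ordinary speed
print(detector.is_suspicious(38.0))  # True: flag for CBP review
```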
While not yet documented, the NBRVSS has the potential to raise ethical questions of its
own. Since the system only detects unusual movement, and the officers themselves then decide
whether or not to pursue a vessel, any number of biases can come into play. For example, the
system can show the people on board a vessel: what if they become profiled? Will certain groups
of people receive more attention than others? At the end of the day, which vessels are ultimately
pursued rather than deemed routine? These questions are hypothetical for now, but they may point
to future trends, as was witnessed with the UK algorithm.
AI: Friend or Foe?
Most of these applications demonstrate that, while AI has the potential to improve the
efficiency of certain government functions, it often comes at the expense of the people. Visas,
often tedious forms that take time both to apply for and to evaluate, can logically be sped
along with AI. However, when governments do not adequately prepare for potential bias or
discrimination, these new applications can prove problematic.
National sovereignty is a critical function of the modern state. Knowing what goes in and
out of a country is important to national security, and countries go to great lengths to prioritize
their citizens. One major policy aim of the United States is to prevent both the illegal migration
of people across its borders and the smuggling of illegal drugs and goods. However, studies hint
that its AI applications on the southern border may not be as effective as hoped and, whether
intentionally or not, have contributed to the deaths of migrants attempting to cross the border.
As technology and AI progress, so too will the applications used by governments. In the
future, they are bound to become more targeted, efficient, and successful. In the
meantime, however, is it worth implementing AI in government when so many ethical questions
are at stake? How transparent must governments be about these applications? One thing is for sure:
governments will continue to seek out ways to improve their efficiency and the services they
provide. Whether those ways are fair and just, however, will continue to be assessed for decades
to come.
From governance to detection, AI can serve multiple functions for government bodies.
However, despite their positive attributes, many new implementations of these technologies
are subject to scrutiny and to questions of privacy and security. The prior examples of AI applications
raise two major questions: how far can governments go in automating government
functions? And how transparent will governments be in their new implementations of AI?
Thus, is AI a friend or a foe? Do the potential benefits of apprehending illegal migrants
outweigh the cost of a rise in fatalities? Will governments use AI to mask more explicit
discrimination and blame the technology, benefitting from the general public’s overall lack of
technological literacy? Which is more important: getting the job done quickly, or getting the job
done right?
Implementations of AI in government are bound to increase over the coming decades.
Inevitably, they will draw concerns about national security, data privacy, and prejudice. One thing
is certain: without adequate transparency or proper implementation, people around the world, in
both democracies and authoritarian regimes, will likely come to realize the immense capabilities
and threats AI possesses in governance.
Exhibit 1: AST Night Vision and Object Detection
Exhibit 2: Mortality Distribution Before Implementation of SBInet vs. After
Endnotes

1. JCWI. “We Won! Home Office to Stop Using Racist Visa Algorithm.” Joint Council for the Welfare of Immigrants, August 4, 2020. https://www.jcwi.org.uk/news/we-won-home-office-to-stop-using-racist-visa-algorithm.
2. Collins, Katie. “UK Agrees to Redesign ‘Racist’ Algorithm That Decides Visa Applications.” CNET, August 4, 2020. https://www.cnet.com/tech/services-and-software/uk-gov-agrees-to-redesign-racist-algorithm-that-decides-visa-applications/.
3. “How Many People Come to the UK Each Year (Including Visitors)?” GOV.UK. Accessed April 18, 2022. https://www.gov.uk/government/statistics/immigration-statistics-year-ending-december-2021/how-many-people-come-to-the-uk-each-year-including-visitors.
4. Kelland, Kate. “Scientists Angry at UK Visa Denials for African, Asian Researchers.” Reuters, November 8, 2018. https://www.reuters.com/article/us-science-britain-visas/scientists-angry-at-uk-visa-denials-for-african-asian-researchers-idUSKBN1ND2B0.
5. Grant, Harriet. “MPs Say ‘Embarrassing and Insulting’ UK Visa System Damages Africa Relations.” The Guardian, July 17, 2019. https://www.theguardian.com/global-development/2019/jul/17/mps-say-embarrassing-and-insulting-uk-visa-system-damages-africa-relations.
6. “All Party Parliamentary Group for Africa.” Royal African Society, March 11, 2020. https://royalafricansociety.org/whatwedo/policy/appga/.
7. Bulman, May. “African People Twice as Likely to Be Refused UK Visas, Damning Report Finds.” The Independent, July 17, 2019. https://www.independent.co.uk/news/uk/home-news/uk-africa-visas-home-office-denied-appg-immigration-a9008106.html.
8. Grant, “UK Visa System Damages Africa Relations.”
9. Bulman, “African People Twice as Likely to Be Refused UK Visas.”
10. Feathers, Todd. “U.K. Immigration Lawyers Fought a Racist Algorithm and Won.” VICE, August 5, 2020. https://www.vice.com/en/article/935gka/uk-immigration-lawyers-fought-a-racist-algorithm-and-won.
11. JCWI. “We Won!”
12. Bulman, May. “Home Office Scraps ‘Racist’ Visa Algorithm.” The Independent, August 4, 2020. https://www.independent.co.uk/news/uk/home-news/home-office-visa-application-algorithm-racist-a9654016.html.
13. Feathers, “U.K. Immigration Lawyers.”
14. Deahl, Dani. “The EU Plans to Test an AI Lie Detector at Border Points.” The Verge, October 31, 2018. https://www.theverge.com/2018/10/31/18049906/eu-artificial-intelligence-ai-lie-detector-border-points-immigration.
15. Gallagher, Ryan, and Ludovica Jona. “We Tested Europe’s New Lie Detector for Travelers and Immediately Triggered a False Positive.” The Intercept, July 26, 2019. https://theintercept.com/2019/07/26/europe-border-control-ai-lie-detector/.
16. “European Court Supports Transparency in Risky EU Border Tech Experiments.” European Digital Rights (EDRi). Accessed April 18, 2022. https://edri.org/our-work/european-court-supports-transparency-in-risky-eu-border-tech-experiments/.
17. Gallagher and Jona, “We Tested Europe’s New Lie Detector.”
18. Stone, Louis. “Anduril Raises $200m, Wins Contract for …” AI Business, July 6, 2020. https://aibusiness.com/document.asp?doc_id=762178.
19. Miroff, Nick. “Powered by Artificial Intelligence, ‘Autonomous’ Border Towers Test Democrats’ Support for Surveillance Technology.” The Washington Post, March 12, 2022. https://www.washingtonpost.com/national-security/2022/03/11/mexico-border-surveillance-towers/.
20. Resendiz, Julian. “Border Patrol Adds Artificial Intelligence Cameras to Security Arsenal.” BorderReport, March 26, 2022. https://www.borderreport.com/hot-topics/immigration/border-patrol-adds-artificial-intelligence-cameras-to-security-arsenal/.
21. Ruiten, Roxy Van. “Border Patrol Has New Tool to Help Secure the Border: Autonomous Surveillance Towers.” KTSM 9 News, October 2, 2021. https://www.ktsm.com/local/border-patrol-has-new-tool-to-help-secure-the-border-autonomous-surveillance-towers/.
22. Phippen, J. Weston. “‘A $10-Million Scarecrow’: The Quest for the Perfect ‘Smart Wall’.” POLITICO, December 10, 2021. https://www.politico.com/news/magazine/2021/12/10/us-mexico-border-smart-wall-politics-artificial-intelligence-523918.
23. Ibid.
24. Boyce, Geoffrey Alan, and Samuel Norton Chambers. “The Corral Apparatus: Counterinsurgency and the Architecture of Death and Deterrence along the Mexico/United States Border.” Geoforum 120 (2021): 1–13. https://doi.org/10.1016/j.geoforum.2021.01.007.
25. Koscak, Paul. “CBP Artificial Intelligence.” U.S. Customs and Border Protection. Accessed April 18, 2022. https://www.cbp.gov/frontline/cbp-artificial-intelligence.
26. Ibid.
