Psychology of cybersecurity
Psychology of cybersecurity is an interdisciplinary field that studies the psychological and behavioral factors influencing security behaviors and vulnerabilities in digital environments. It examines why individuals fall for phishing attacks, share passwords, and ignore security warnings, and how organizations can design systems to promote better security practices.[1]
Overview
Research in the field is supported by academic institutions focused on human-computer interaction and security, such as the CyLab Usable Privacy and Security Laboratory at Carnegie Mellon University and the Information Security Research Group at University College London.[2]
The field is distinct from cyberpsychology, which studies broader online behavior and digital interactions. Psychology of cybersecurity specifically focuses on security-related decisions, vulnerabilities, and protective behaviors in digital environments.
History
The field emerged from multiple disciplines:
Early foundations (1960s–1980s)
Early observations of human-computer interaction noted security challenges. For instance, researchers in the 1960s documented password sharing among users of time-shared computer systems.[3] Stanley Milgram's obedience experiments, conducted in the early 1960s, have since been analyzed for their implications for understanding compliance with authority figures in social engineering attacks.[4]
Social engineering era (1990s–2000s)
The 1990s saw increased recognition that technological security measures could be bypassed through human manipulation. Kevin Mitnick's use of social engineering to gain unauthorized access to computer systems highlighted that "the human factor is often the weakest link."[5]
Academic establishment (2000s–2010s)
The field gained academic structure in the 2000s. Angela Sasse at UCL introduced the concept of the "compliance budget" in 2008, suggesting that users have a finite capacity for security tasks, which challenged assumptions that more security training would automatically improve outcomes.[6] Cormac Herley at Microsoft Research argued in 2009 that users rationally reject security advice when the perceived costs outweigh the benefits.[7]
Theoretical foundations
[edit]Cognitive psychology
[edit]Dual-process theory
Daniel Kahneman's framework of System 1 (fast, automatic) and System 2 (slow, deliberate) thinking is used to explain security failures. Phishing attacks often succeed because they trigger quick, heuristic-based judgments from System 1.[8] Many security decisions are made under time pressure or distraction, conditions that favor automatic processing.
Cognitive biases
Several cognitive biases have been identified as relevant to cybersecurity:[9]
- Optimism bias: The tendency to underestimate one's personal risk of experiencing a security incident.
- Confirmation bias: Seeking information that confirms pre-existing beliefs about security threats.
- Availability heuristic: Overestimating the likelihood of risks that are more memorable, such as recently publicized data breaches.
Social psychology
[edit]Social engineering principles
Robert Cialdini's principles of influence are frequently applied to understand social engineering tactics.[10]
Principle | Attack technique | Example
---|---|---
Reciprocity | Quid pro quo | "I helped you, now help me with your password"
Commitment | Foot-in-the-door | Small request escalates to credential theft
Social proof | Fake consensus | "Everyone in finance uses this link"
Authority | Impersonation | CEO fraud, fake IT support
Liking | Rapport building | Befriending the target before the attack
Scarcity | Urgency/fear | "Account expires in 1 hour!"
Behavioral economics
Concepts from behavioral economics, such as Herbert A. Simon's bounded rationality, help explain security decisions. Users often "satisfice," choosing passwords that are just strong enough to meet minimum requirements rather than optimizing for security.[11] Users also perform informal cost-benefit analyses and may bypass security controls they perceive as too cumbersome.
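This cost-benefit framing can be restated as a simple expected-value comparison. The sketch below is a toy model only: the function, parameter names, and numbers are invented here to illustrate Herley-style reasoning, not taken from the cited work.

```python
# Toy model of Herley-style cost-benefit reasoning about security advice.
# All names and numbers are illustrative, not drawn from the cited studies.

def advice_worth_following(p_incident: float, harm: float, risk_reduction: float,
                           daily_effort_min: float, value_per_min: float) -> bool:
    """True if the expected annual loss averted exceeds the annual time cost."""
    expected_benefit = p_incident * harm * risk_reduction  # loss averted per year
    annual_cost = daily_effort_min * 365 * value_per_min   # compliance time cost per year
    return expected_benefit >= annual_cost

# A 1% annual phishing risk with $500 harm, advice that halves the risk,
# costing 5 minutes a day valued at $0.50 per minute:
print(advice_worth_following(0.01, 500, 0.5, 5, 0.50))  # False: $2.50 benefit vs $912.50 cost
```

On these invented numbers, rejecting the advice is the rational choice, which is precisely the argument the paragraph above describes.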
Psychoanalytic foundations
Psychoanalytic theory has been applied to understand unconscious factors in cybersecurity. Wilfred Bion's basic assumptions theory describes how groups under stress develop dependency on security solutions, fight-flight responses to threats, or pairing fantasies about future technological salvation.[12]
Melanie Klein's concept of splitting manifests as organizations idealizing internal systems while demonizing external attackers, creating critical security blind spots.[13] Carl Jung's shadow theory explains how organizations project their own vulnerabilities onto attackers, while Donald Winnicott's transitional space concept helps understand reality testing in digital environments.[14]
Pre-cognitive vulnerabilities
[edit]Neuroscience foundations
Research indicates that decision-making begins 300–500 milliseconds before conscious awareness, suggesting security decisions are substantially influenced by pre-cognitive processes.[15] Functional MRI studies show amygdala activation (threat response) occurs before prefrontal cortex engagement (rational analysis).[16]
Unconscious processes
Organizations develop "social defense systems" against anxiety that create systematic security vulnerabilities.[17] These unconscious dynamics include:
- Projection: Attributing internal threats to external attackers
- Splitting: Dividing systems into "all good" or "all bad" categories
- Repetition compulsion: Repeating insecure patterns despite negative outcomes
Key vulnerabilities
[edit]Authority and trust
Research inspired by the Milgram experiment suggests that individuals are highly susceptible to requests from perceived authority figures. In cybersecurity contexts, this can manifest as compliance with requests from impersonated IT staff or executives.[18]
Cognitive fatigue
Repeated exposure to security warnings can lead to alert fatigue, where users habitually dismiss notifications without careful consideration.[19] The cognitive load associated with managing numerous passwords can result in insecure practices such as password reuse or the use of weak, memorable passwords.[20]
Group dynamics
Organizations exhibit collective vulnerabilities through group dynamics:[21]
- Groupthink: Suppressing security concerns to maintain group harmony
- Diffusion of responsibility: Assuming others will handle security tasks
- Risky shift: Groups making riskier decisions than individuals
- Basic assumption groups: Dependency, fight-flight, and pairing responses to threats
Psychological attack techniques
[edit]Pretexting
Pretexting involves creating a fabricated scenario to obtain information. Experimental studies have demonstrated that this technique can achieve high success rates in eliciting compliance from targets.[22]
Phishing psychology
Phishing attacks exploit multiple psychological mechanisms, including visual deception (e.g., lookalike domains), creating a sense of urgency, and impersonating authority figures. Even with training, click rates for phishing emails remain significant, with highly targeted spear phishing campaigns showing particularly high success rates.[23]
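The lookalike-domain mechanism lends itself to a short illustration. The following Python sketch is hypothetical: the trusted list, the deliberately tiny homoglyph map, and the similarity threshold are assumptions made here, not part of any cited detection system.

```python
# Minimal sketch: flag domains that resemble a trusted domain, either via
# character substitution (homoglyphs) or near-identical spelling.
from difflib import SequenceMatcher

TRUSTED = ["example.com", "paypal.com"]                     # illustrative allowlist
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e"})  # tiny, incomplete map

def is_lookalike(domain: str, threshold: float = 0.85) -> bool:
    d = domain.lower()
    normalized = d.translate(HOMOGLYPHS)
    for trusted in TRUSTED:
        if d == trusted:
            return False  # the genuine domain itself
        if normalized == trusted:
            return True   # identical only after homoglyph substitution
        if SequenceMatcher(None, normalized, trusted).ratio() >= threshold:
            return True   # near-identical spelling, e.g. one character off
    return False

print(is_lookalike("paypa1.com"))   # True: '1' substitutes for 'l'
print(is_lookalike("example.com"))  # False: exact trusted match
```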
Defensive psychology
[edit]Training approaches
Traditional security awareness training has shown limited effectiveness in changing long-term behavior. More effective approaches include embedded training (providing guidance at the point of risk) and simulated phishing exercises with immediate feedback.[24]
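A minimal sketch of the simulated-phishing pattern appears below; the flow, function names, and feedback text are hypothetical, intended only to show the "immediate feedback" idea rather than any cited system.

```python
# Sketch: when a user clicks a simulated phish, log the event and return
# just-in-time training instead of a malicious payload. Names are invented.
from datetime import datetime, timezone

training_log: list[dict] = []

def on_simulated_phish_click(user: str, campaign: str) -> str:
    training_log.append({
        "user": user,
        "campaign": campaign,
        "time": datetime.now(timezone.utc).isoformat(),
    })
    return (
        "This was a simulated phishing email. Red flags to notice: "
        "mismatched sender domain, urgent tone, and an unexpected link. "
        "Use the 'Report phish' button for suspicious messages."
    )

print(on_simulated_phish_click("jdoe", "Q3-payroll-lure"))
```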
Nudging
Nudge theory has been applied to improve security behaviors. Techniques include setting secure options as the default, simplifying security procedures, and using social proof (e.g., indicating that most colleagues use two-factor authentication).[25]
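Two of these nudges can be shown in a few lines of code. The sketch below is illustrative only; the settings object and the adoption figure are assumptions, not a real product's API.

```python
# Sketch of two nudges: secure-by-default settings (opt-out rather than
# opt-in) and a social-proof prompt. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class AccountSettings:
    two_factor_enabled: bool = True   # secure default: user must opt out
    auto_update: bool = True
    session_timeout_min: int = 15

def social_proof_prompt(adoption_rate: float) -> str:
    """Frame the request around peer behavior rather than a threat."""
    return (f"{adoption_rate:.0%} of your colleagues already use "
            "two-factor authentication. Enable it now?")

settings = AccountSettings()      # 2FA is on unless actively disabled
print(social_proof_prompt(0.82))  # "82% of your colleagues already use ..."
```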
Psychological assessment frameworks
Emerging frameworks propose systematic assessment of psychological vulnerabilities through indicators spanning cognitive load patterns, group dynamics, stress responses, and unconscious processes. These frameworks aim to identify vulnerabilities before they can be exploited by attackers.[26]
Organizational factors
[edit]Security culture
Organizations have measurable security cultures affecting behavior:[27]
- Compliant: Rule-following but inflexible
- Proactive: Anticipating threats, continuous improvement
- Reactive: Responding only after incidents
- Adaptive: Learning from experiences and adjusting behaviors
Leadership influence
Executive behavior sets the security tone throughout an organization. Leaders who visibly follow security procedures can increase compliance by up to 40%, while exceptions made for executives may normalize security-bypassing behavior.[28]
Emerging areas
[edit]AI and security psychology
As artificial intelligence integrates into security operations, new psychological factors emerge:[29]
- Anthropomorphization: Attribution of human qualities to AI systems leading to over-trust
- Automation bias: Over-reliance on AI recommendations reducing human vigilance
- Algorithm aversion: Rejecting AI systems after a single error despite their overall accuracy
- AI authority transfer: Uncritical acceptance of algorithmic decisions
Remote work psychology
The shift to remote work introduces new psychological considerations:
- Reduced security vigilance in home environments
- Blurred boundaries between work and personal life increasing risky behaviors
- Isolation potentially increasing susceptibility to social engineering
- Video conferencing fatigue degrading security decision quality
Cryptocurrency psychology
Cryptocurrency environments create unique psychological vulnerabilities:
- Fear of missing out (FOMO) driving impulsive decisions
- Complexity intimidating users into poor security practices
- Irreversible transactions increasing decision stress
- Anonymity reducing perceived consequences of security lapses
Measurement and assessment
[edit]Behavioral indicators
Measurable security behaviors include password reuse rates, response times to phishing attempts, reporting rates for suspicious activities, and adoption rates of optional security features like two-factor authentication.[30]
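Two of these indicators can be computed directly from routine data. The sketch below uses invented sample values; the functions and field names are hypothetical.

```python
# Illustrative computation of two indicators named above: password reuse
# rate and phishing-simulation click rate. Sample data is made up.
from collections import Counter

def password_reuse_rate(password_hashes: list[str]) -> float:
    """Share of accounts whose password hash also appears on another account."""
    counts = Counter(password_hashes)
    reused = sum(n for n in counts.values() if n > 1)
    return reused / len(password_hashes)

def phishing_click_rate(sent: int, clicked: int) -> float:
    return clicked / sent

hashes = ["a1", "b2", "a1", "c3", "a1", "d4"]              # "a1" shared by three accounts
print(f"reuse rate: {password_reuse_rate(hashes):.0%}")    # 50%
print(f"click rate: {phishing_click_rate(200, 23):.1%}")   # 11.5%
```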
Psychological assessment tools
Several validated instruments measure security-related psychological factors:
- Security Behavior Intentions Scale (SeBIS)
- Human Aspects of Information Security Questionnaire (HAIS-Q)
- Security Culture Assessment Tool (SCAT)
- Cybersecurity Risk Perception Scale (CRPS)
Criticism and limitations
[edit]Blaming users
Critics argue that focusing on psychological factors may shift responsibility from poor system design to individual users. The concept of "human error" often reflects "design error" in security systems.[31]
Privacy concerns
Psychological profiling for security purposes raises ethical questions regarding employee monitoring, potential discrimination based on psychological states, and behavioral data collection without explicit consent.
Cultural considerations
Most research originates from Western contexts, while security behaviors vary significantly across cultures in areas such as authority responses, information sharing practices, and privacy expectations.
See also
- Cyberpsychology
- Social engineering (security)
- Human factors and ergonomics
- Usable security
- Information security awareness
- Password psychology
- Cognitive bias
- Behavioral economics
- Psychoanalytic theory
References
- ^ Acquisti, Alessandro; Gross, Ralph (2006). "Imagined Communities: Awareness, Information Sharing, and Privacy on the Facebook". Privacy Enhancing Technologies: 36–58. doi:10.1007/11957454_3.
- ^ Whitten, Alma; Tygar, J. D. (1999). "Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0". Proceedings of the 12th USENIX Security Symposium. 12: 169–184.
- ^ Anderson, Ross (2020). Security Engineering (3rd ed.). Wiley. ISBN 978-1119642787.
- ^ Milgram, Stanley (1974). Obedience to Authority. Harper & Row. ISBN 978-0061765216.
- ^ Mitnick, Kevin; Simon, William (2002). The Art of Deception. Wiley. ISBN 978-0471237129.
- ^ Beautement, Adam; Sasse, M. Angela; Wonham, Mike (2008). "The compliance budget: managing security behaviour in organisations". Proceedings of the 2008 workshop on New security paradigms. ACM. pp. 47–58. doi:10.1145/1595676.1595684.
- ^ Herley, Cormac (2009). "So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users". Proceedings of the New Security Paradigms Workshop (NSPW '09). pp. 133–144. doi:10.1145/1719030.1719050.
- ^ Kahneman, Daniel (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. ISBN 978-0374275631.
- ^ Canfield, Clarence; Duane, Alexander; Kara, Ibrahim (2022). "A Taxonomy of Cognitive Biases Impacting Cyber Security Decisions". Computers & Security. 114: 102579. doi:10.1016/j.cose.2021.102579.
- ^ Cialdini, Robert (2006). Influence: The Psychology of Persuasion (Revised ed.). Harper Business. ISBN 978-0061241895.
- ^ Simon, Herbert (1956). "Rational choice and the structure of the environment". Psychological Review. 63 (2): 129–138. doi:10.1037/h0042769. PMID 13310708.
- ^ Bion, Wilfred R. (1961). Experiences in Groups. Tavistock Publications.
- ^ Klein, Melanie (1946). "Notes on Some Schizoid Mechanisms". International Journal of Psychoanalysis. 27: 99–110.
- ^ Jung, Carl G. (1969). The Archetypes and the Collective Unconscious. Princeton University Press.
- ^ Libet, Benjamin; Gleason, C. A.; Wright, E. W.; Pearl, D. K. (1983). "Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity". Brain. 106 (3): 623–642. doi:10.1093/brain/106.3.623.
- ^ LeDoux, Joseph E. (2000). "Emotion Circuits in the Brain". Annual Review of Neuroscience. 23: 155–184. doi:10.1146/annurev.neuro.23.1.155.
- ^ Menzies Lyth, Isabel (1960). "A Case-Study in the Functioning of Social Systems as a Defence Against Anxiety". Human Relations. 13 (2): 95–121. doi:10.1177/001872676001300201.
- ^ Gratian, Morgan; Banday, Mohammad; Simmons, Justin; Hyatt, Robert (2020). "The Persuasive Power of Phishing: A Psychological Analysis of Authority and Urgency". Journal of Cybersecurity Research. 5 (2): 112–125.
- ^ Akhawe, Devdatta; Felt, Adrienne Porter (2013). "Alice in Warningland". 22nd USENIX Security Symposium. pp. 257–272.
- ^ Wash, Rick; Rader, Emanuel; Berman, Rachel; Wellmer, Zoe (2016). "Understanding Password Choices: How Frequently Entered Passwords Are Re-used Across Websites". Proceedings of the Symposium on Usable Privacy and Security (SOUPS). 12.
- ^ Kernberg, Otto F. (1998). Ideology, Conflict, and Leadership in Groups and Organizations. Yale University Press.
- ^ Bullee, Jan-Willem; Montoya, Linda; Junger, Marianne; Hartel, Pieter (2018). "The effect of a pretext phone call on the success of a spear phishing attack". Journal of Investigative Psychology and Offender Profiling. 15 (2): 198–214. doi:10.1002/jip.1505.
- ^ Canfield, Clarence; Duane, Alexander; Kara, Ibrahim (2022). "A Taxonomy of Cognitive Biases Impacting Cyber Security Decisions". Computers & Security. 114: 102579. doi:10.1016/j.cose.2021.102579.
- ^ Kumaraguru, P. (2010). "Teaching Johnny not to fall for phish". ACM Transactions on Internet Technology. 10 (2): 1–31. doi:10.1145/1754393.1754396.
- ^ Sunstein, Cass R. (2014). "Nudging: A Very Short Guide". Journal of Consumer Policy. 37 (4): 583–588. doi:10.1007/s10603-014-9273-1.
- ^ Canale, Giuseppe (2024). "The Cybersecurity Psychology Framework: A Pre-Cognitive Vulnerability Assessment Model". Preprint.
- ^ Kirlappos, Iacovos; Sasse, M. Angela (2014). "What Makes a Good Password? The Effect of Security Culture on Password Behavior". Proceedings of the Symposium on Usable Privacy and Security (SOUPS): 257–268.
- ^ Crossler, Robert E.; Belanger, France (2014). "The Effects of Security Education Training on Security Compliance". Journal of Information Systems. 28 (2): 41–60. doi:10.2308/isys-50702.
- ^ Zhang, Bonnie; Andras, Peter (2022). "The Psychology of Human-AI Interaction in Cybersecurity". Computers & Security. 112: 102528. doi:10.1016/j.cose.2021.102528.
- ^ Egelman, Serge; Peer, Eyal (2015). "The Myth of the Average User". Proceedings of the Symposium on Usable Privacy and Security (SOUPS): 16–30.
- ^ Herley, Cormac (2009). "So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users". Proceedings of the New Security Paradigms Workshop: 133–144. doi:10.1145/1719030.1719050.
Further reading
- Anderson, Ross (2020). Security Engineering: A Guide to Building Dependable Distributed Systems (3rd ed.). Wiley. ISBN 978-1119642787.
- Hadlington, Lee (2017). Cybercognition: Brain, Behaviour and the Digital World. SAGE. ISBN 978-1473953307.
- "Special Issue: Human Factors in Cybersecurity". Computers & Security. 106. 2021.
- Schneier, Bruce (2015). Secrets and Lies: Digital Security in a Networked World. Wiley. ISBN 978-1119092438.
- Bion, Wilfred R. (1961). Experiences in Groups. Tavistock Publications.
- Klein, Melanie (1975). Envy and Gratitude and Other Works. Hogarth Press.
External links
- CyLab Usable Privacy and Security Laboratory - Carnegie Mellon University
- USENIX Symposium on Usable Privacy and Security
- NIST Usable Cybersecurity - U.S. National Institute of Standards and Technology
- Information Security Research - University College London