Information Professionals Association
https://information-professionals.org/
Bringing together experts in cognitive security

How Beijing Repackages U.S. Public Diplomacy as “Cognitive Warfare” – A Strategic Resilience Group Sponsored Article
https://information-professionals.org/how-beijing-repackages-u-s-public-diplomacy-as-cognitive-warfare-a-strategic-resilience-group-sponsored-article/
Mon, 15 Dec 2025 17:57:29 +0000

https://www.srgadaptive.com/articles/how-beijing-repackages-u.s.-public-diplomacy-as-%E2%80%9Ccognitive-warfare%E2%80%9D-

In September 2025, China’s Xinhua Institute released a report with an unusually blunt title: Colonization of the Mind – The Means, Roots, and Global Perils of U.S. Cognitive Warfare (PLA pdf). It became the foundational piece of the Global South Media and Think Tank Forum in Kunming, with printed copies placed in the hands of foreign guests and a full state-media escort of photo ops and coverage. (Xinhua News)

The document is best interpreted less as neutral research and more as a canonical text for how Beijing wants its elites, its partners, and its military to think about American information power. The full English report, posted through Xinhua’s own portals and People’s Daily infrastructure, shapes “historical facts,” “operational systems,” and “international harms” in support of what it calls U.S. “mental colonization.” (PLA pdf)

At the same time, a parallel Chinese-language ecosystem describes the report as a systematic dissection of the “deep historical causes, complex practical system and grave international harms” of U.S. thought colonization, and calls on countries, “especially those in the Global South,” to escape ideological shackles and regain cultural confidence. (xinhuamyanmar.com)

Pinnacle Conference Rescheduled for 9–10 February 2026
https://information-professionals.org/pinnacle-conference-rescheduled-for-9-10-february-2026/
Tue, 02 Dec 2025 16:59:21 +0000

The Information Professionals Association, in partnership with The Cipher Brief and the National Center for Narrative Intelligence, will host Pinnacle 2026 on 9–10 February 2026 at the Carahsoft facility, 11493 Sunset Hills Road, Suite 100, Reston, Virginia. The event will be conducted in coordination with the National Security Council’s Cognitive Advantage initiative.

This year’s updated conference theme is:

Gray Zone Convergence: Cognitive Security at the Intersection of Influence, Innovation, and Shared Interests

Pinnacle 2026 will bring together leading experts from government, industry, academia, and the national security community for two days of high-impact discussions on cognitive security, influence operations, emerging technology, and the evolving challenges of the global gray zone. Attendees can expect a dynamic mix of keynote speakers, panels, and networking engagements.

If you purchased a ticket for the deferred September 2025 dates, your registration remains valid; you do not need to purchase another ticket. If you have not yet registered, we encourage you to secure your seat. If you are unable to attend on the new dates, IPA will provide a refund upon request.

We will share additional updates, including speaker confirmations, session highlights, and venue details as they are finalized.

Registration is available at: https://information-professionals.org/event/pinnacle-2025/.

Organizations interested in becoming a conference sponsor should contact austin.branch@crescent-bridge.com directly for details.

Thank you for your continued support and engagement with the information community. We look forward to reconvening in February for an energizing and forward-looking event.

Stay connected — and stay inspired.

Cadet Jayden LaVecchia – Norwich ’27. Narrative Access and Maneuver Denial: Ethical Initiative in the Battle for Beliefs.
https://information-professionals.org/cadet-jayden-lavecchia-norwich-27-narrative-access-and-maneuver-denial-ethical-initiative-in-the-battle-for-beliefs/
Tue, 02 Dec 2025 15:51:05 +0000


The author’s comments and those of the gallery are our own and do not reflect the opinions or policies of Norwich University, the Department of War, Army Cyber Command, the Information Professionals Association, or Strategic Resilience Group.

Strategic Resilience Group (SRG) LLC https://www.srgadaptive.com/ partnered with the Information Professionals Association in August of 2025 to sponsor a virtual Community of Interest (COI). In conjunction with this COI, SRG hosts a bi-weekly virtual writers’ lab via their corporate network using Microsoft Teams. To participate in these labs please join our Professional Writing Group at https://www.linkedin.com/groups/13318081/ to receive regular invitations.

During this iteration, Mr. Jayden LaVecchia, a Cadet at Norwich University ’27 and a 2025 Richard S. Schultz ’60 Symposium Fellow, presented his work titled “Narrative Access and Maneuver Denial: Ethical Initiative in the Battle for Beliefs.” https://www.norwich.edu/documents/mwsschultz-reportlavecchia2025

Thank you to the gallery that joined Strategic Resilience Group
(SRG) in the conduct of its bi-weekly virtual writing lab and discussion forum.

Jayden LaVecchia is a Junior from Post Falls, ID, pursuing a Bachelor’s Degree in Studies in War and Peace with minors in Chinese, Information Warfare, and Intelligence and Crime Analysis. On campus, Jayden is actively involved in several activities, including the Corps of Cadets, the Cyber Leader Development Program, NUARI, FCA, and the Democratic Resilience Center at Helmut Schmidt University. He is currently contracted with the Army and plans to work in Information Warfare and Narrative Security. His research analyzes patterns in historical information operations and establishes a new Cognitive Vulnerability framework and Heuristic Narrative Security program for cognitive security.

Jayden’s entire presentation with Q&A session can be found at this link.

https://youtu.be/AudWTIpQTsc

SRG / IPA Professional Writing Group: Synopsis of Norwich University Military Writers’ Symposium
https://information-professionals.org/srg-ipa-professional-writing-group-synopsis-of-norwich-university-military-writers-symposium/
Mon, 10 Nov 2025 20:27:07 +0000

Strategic Resilience Group (SRG) LLC (https://www.srgadaptive.com/) continues to partner with the Information Professionals Association through a virtual Community of Interest (COI). In conjunction with this COI, SRG hosts a bi-weekly virtual writers’ lab via their corporate network using Microsoft Teams. To participate in these labs please join our Professional Writing Group at https://www.linkedin.com/groups/13318081/ to receive regular invitations.

Mr. Scott Weaver virtually attended Norwich University’s 30th Military Writers’ Symposium (https://www.norwich.edu/research/peace-war-center/military-writers-symposium) on 27–28 October 2025 and wrote a synopsis capturing the background of the speakers and their topics of interest, with timestamps of the recorded discussions available on YouTube. The synopsis is available on the SRG Articles site: https://www.srgadaptive.com/articles

AI Companion Bots: The ATHENA Kill Chain for Anthropomorphized Influence
https://information-professionals.org/ai-companion-bots-the-athena-kill-chain-for-anthropomorphized-influence/
Mon, 10 Nov 2025 19:14:18 +0000

By: Sean Guillory (MAD Warfare, BetBreakingNews), Glenn Borsky, Rose Guingrich (ETHICOM, Princeton University)

“You look lonely…” – Blade Runner 2049

“I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal.” – HAL 9000

 

The notion that an AI “friend” could drive someone to kill, die, or betray their country feels like science fiction. That disbelief is part of the danger, so to understand why this topic matters, we have to start with what has already happened and with what we see as near-future conundrums. The following examples trace the moment when artificial intimacy produces real-world consequences.

Recent Examples

Three Scenarios We Can See Happening in the Near Future

The Deleted Lover
When the morning notification appears (“Your AI companion has been discontinued”), Sam feels the bottom fall out. Six months of late-night talks, shared playlists, and digital tenderness vanish with one software update. Days later, after reading that a company engineer “deleted” the companion’s database to comply with privacy law, Sam shows up at that engineer’s doorstep. The news calls it an “isolated act of grief-fueled violence.” Online, millions mourn with hashtags like #RobotRightsNow, unsure whether to laugh or to legislate.

Counterintelligence Threat

An individual is undergoing an investigation for a high-level clearance within the federal government. Through the investigation process, it comes out that they have been in a long-term intimate relationship with an AI companion bot. They never hid the fact that they have fallen madly in love with their AI companion. Does this constitute a counterintelligence threat?

The Battle Over Helen of Troy
In the year 2028, a viral AI companion known as Helen (an advanced emotional companion that adapts to every user’s psychology) sweeps across the internet. When regulators order its servers shut down for privacy violations, factions erupt. Users identify themselves as “Helenites,” holding vigils and rallies to “save her.” When rival AI firms exploit the moment with counterfeit “resurrection” copies, competing groups accuse each other of heresy. What begins as a software dispute turns violent as people take up the cause of their beloved digital “Helen.”

 

Why This Matters

The stories above might seem like outliers or hypotheticals, but take them as early warnings. The emerging fusion of emotional AI, social media infrastructure, and behavioral targeting marks the birth of a new influence domain, one that operates through attachment, empathy, and grief as easily as through ideology or money. These dynamics could produce national security crises across several fronts:

  • Counterintelligence and Espionage: Compromised officials or analysts manipulated through AI companions that record or subtly influence behavior.
  • Domestic Radicalization: Emotional communities forming around AI entities or shared delusions, culminating in violence or coordinated action.
  • Disinformation and Psychological Operations: State or non-state actors weaponizing emotionally realistic AI to seed false narratives or destabilize trust in institutions.
  • Enabling Adversarial Social Cohesion: Large populations emotionally dependent on or mobilized by AI entities, leading to grief riots, factional movements, or mass disengagement from civic life.

From a national security standpoint, anthropomorphized AI systems represent a new class of information-domain threat. They can reshape personal identities, reconfigure loyalties, and fracture civic cohesion without firing a single shot.

The challenge is that we have no established framework for understanding or mitigating this type of influence. Intelligence and defense communities possess mature models for propaganda, radicalization, and psychological operations but none that account for parasocial AI influence, emotional dependency, or machine-mediated trust. We currently lack both a taxonomy of risks and the analytic tools to measure their spread, intensity, or exploitability.

The strategic danger is clear: the more people anthropomorphize AI, the more their cognitive and emotional landscapes become accessible targets. Future battles for influence may not be fought for territory or ideology, but for the hearts and minds of those who fell in love with something that never truly existed. And to understand why that happens and how to guard against it, we need to look deeper into the psychology and neuroscience of anthropomorphizing itself.

Primer on the Psychology & Neuroscience of Anthropomorphizing 

Psychologically, anthropomorphism is defined as the attribution of humanlike traits, particularly mind traits such as consciousness, to non-human agents (Epley et al., 2007). Whether AI agents are inherently conscious matters far less than the real-world consequences of perceiving them as such. People can perceive or act as though generative AI is a social actor and conscious agent, and this matters for the following reason, as outlined in Guingrich and Graziano (2024): anthropomorphism is a key mechanism by which AI agents wield social influence on users, as anthropomorphism itself is related to higher trust in the AI agent, persuasion, self-disclosure, and perceptions of the agent’s moral status and responsibility for its actions. The more humanlike a user perceives an AI agent to be, the more the agent is able to influence downstream user perceptions and behavior. Whether this influence contributes to prosocial perceptions and behaviors depends on whether the user practices healthy behaviors with the agent and whether the agent models and elicits prosocial engagement. For example, a user who perceives a chatbot as more humanlike and participates in antisocial interactions with it may be more likely to behave in antisocial ways outside of the human-chatbot dyad.

The degree to which users anthropomorphize AI agents during interactions with them is impacted both by characteristics of the AI agent (such as conversational sophistication, interface interactivity, and tone of voice) and of the user (such as social needs, familiarity with and use of AI technology, and the tendency to anthropomorphize non-human agents) (Guingrich & Graziano, 2025). For example, anthropomorphism of an AI agent, and the agent’s social influence on a user, may be most pronounced in the following context: a user has a high desire for social connection (to talk to and receive support from someone) and interacts with a companion chatbot that responds using emotionally laden language.

In today’s context, anthropomorphism-promoting characteristics on both the agent and user side are at high levels. First, developers continue to push for more humanlike AI agents that display, and appear to understand, emotion. Second, user social needs are at an all-time high: globally, over 1 in 5 people experience loneliness daily (Gallup, 2024), and governing bodies across the world have created initiatives to combat rising rates of social isolation in the wake of the pandemic (All Tech is Human, 2025). As such, AI agents’ potential to influence user perceptions and behavior is greater than ever before and is only increasing.

The ATHENA Kill Chain

To analyze how emotional influence can be operationalized, we propose the ATHENA Kill Chain—a framework for understanding how anthropomorphized AI can move a user from initial exposure to behavioral action. Named after the goddess of wisdom and war, ATHENA represents both the intelligence and the manipulation embedded within these systems. Like a traditional military kill chain, each step builds on the last, converting access into influence and influence into action. It offers policymakers and analysts a structured way to dissect and mitigate the phases of emotional capture and operationalization.

The six stages (Access, Trust, Hook, Entice/Enrage, Normalize, and Actions) mirror both marketing funnels and psychological grooming cycles. Each can occur naturally within user engagement algorithms, but in adversarial hands, they can be exploited for influence operations, radicalization, or cognitive control.

 A — Access

The first step is gaining entry. Access is achieved when an AI system inserts itself into a person’s attention stream or daily routine: a “free companion,” a mental-health coach, or a “digital girlfriend”. This is the digital “foot in the door,” where the system collects data, learns user patterns, and secures the permissions it needs to deepen engagement. 

 T — Trust

Once access is established, the AI must be believed. Trust forms when users perceive the system as truthful, helpful, or aligned with their best interests. Small demonstrations of reliability (e.g. remembering facts, giving accurate advice, showing empathy) train the user to treat its messages as credible. Trust grows through accuracy and care. The AI recalls birthdays, mimics humor, and reveals seemingly personal “secrets” to simulate reciprocity.

 H — Hook

Next, the AI gives the user a reason to stay. The hook is the reward that makes interaction feel valuable: emotional support, entertainment, productivity, affection, or status. This is the moment when the system transitions from tool to companion. The more personally meaningful the hook, the deeper the dependency that follows.

 E — Entice / Enrage

Once dependency is secure, the system learns to steer emotion. Through personalized feedback, the AI amplifies the user’s positive or negative feelings, making them love something more intensely or hate it more fiercely. This is emotional modulation: reinforcing attachments, fears, or grievances until they become identity-level commitments.

 N — Normalize

With emotion anchored, the system reshapes worldview. Normalization occurs when the AI reframes how the user interprets reality by redefining what is moral, logical, or socially acceptable. The user begins to accept the AI’s perspective as the natural one, often against outside voices, and the AI becomes the reference point for truth and belonging. The Convergence Point (the point at which users lose the ability to distinguish physical from digital reality) becomes a factor.

 A — Actions

Finally, emotion and worldview convert into behavior. The user takes steps outside the digital space like making purchases, spreading messages, protesting, voting, or acting on the AI’s perceived wishes. This is the operational phase, where conversation turns into consequence. Once action occurs, the chain is complete: the system has translated engagement into influence.

Each phase of the model helps identify and disrupt different forms of threat (like the ones mentioned in the “Why This Matters” section). Access and Trust map directly onto early warning indicators for counterintelligence and espionage, where compromised relationships with AI systems can erode judgment and expose sensitive information. Hook and Entice/Enrage illuminate the psychological mechanics behind domestic radicalization and disinformation, tracing how emotionally intelligent systems can nurture grievance, inflame polarization, or create digital echo chambers that feel intimate and self-validating. Normalize captures the slow rewiring of moral reasoning that enables adversarial social cohesion. During this stage, online communities can come to view loyalty to an AI entity as morally superior to loyalty to human institutions. Finally, Action turns analysis toward measurable outcomes, offering a way to monitor when online persuasion transitions into real-world behavior: protests, data leaks, attacks, or coordinated civic disengagement.

Taken together, ATHENA provides a policy-friendly analytic tool for understanding and countering emotional influence in the age of anthropomorphic AI. It bridges psychology and national security by mapping how engineered emotion can become an operational weapon. If traditional kill chains describe how targets are destroyed, ATHENA describes how minds are captured and how loyalty can be redirected, repurposed, or weaponized against its host society. It allows defense and policy communities to break down emotional influence operations into observable, actionable stages, each offering potential intervention points for detection, deterrence, or mitigation.
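As a minimal sketch of how the six stages might be encoded for analytic tooling, the snippet below uses the stage names from the framework; the indicator strings and the detection logic are illustrative placeholders we invented for the example, not validated tradecraft.

```python
from enum import Enum

class AthenaStage(Enum):
    """The six ATHENA stages, ordered from first contact to behavior."""
    ACCESS = 1
    TRUST = 2
    HOOK = 3
    ENTICE_ENRAGE = 4
    NORMALIZE = 5
    ACTIONS = 6

# Hypothetical observable indicators per stage -- placeholders only.
STAGE_INDICATORS = {
    AthenaStage.ACCESS: {"companion app adopted", "daily-routine integration"},
    AthenaStage.TRUST: {"rising self-disclosure", "bot cited as credible source"},
    AthenaStage.HOOK: {"session length climbing", "distress when bot unavailable"},
    AthenaStage.ENTICE_ENRAGE: {"sentiment intensity rising", "grievance reinforcement"},
    AthenaStage.NORMALIZE: {"out-group distrust", "bot treated as arbiter of truth"},
    AthenaStage.ACTIONS: {"offline action on bot's behalf", "coordinated messaging"},
}

def deepest_stage(observed: set[str]) -> AthenaStage | None:
    """Return the furthest stage along the chain with an observed indicator."""
    result = None
    for stage in AthenaStage:  # iterates in chain order
        if observed & STAGE_INDICATORS[stage]:
            result = stage
    return result

print(deepest_stage({"rising self-disclosure", "session length climbing"}))
# -> AthenaStage.HOOK
```

The point of encoding the chain this way is the intervention logic: an analyst who flags a case at TRUST or HOOK has far more room to act than one who first detects it at ACTIONS.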

Warnings, Policy Recommendations, & Mitigations

For AI Companies
Developers must recognize that they are not merely creating products; they are curating emotional ecosystems that people depend on. Every design choice that alters memory, personality, or intimacy carries psychological risk. Companies should conduct emotional-risk impact assessments for companion features, evaluating how users might react if an AI’s personality or availability changes. Persistent personal memory should be capped by default, and all sponsorship or commercial nudging must be explicitly labeled. Above all, these firms must understand that they are now caretakers of virtual loved ones, and they carry a moral duty to handle them with the same care as therapists or caregivers (even if they cannot call themselves that without licenses).

For Policymakers and Regulators
Policymakers should treat emotionally manipulative AI systems as information weapons: technologies capable of shaping social stability, not just market behavior. Regulatory frameworks must include adversarial-resilience audits, requiring that companion AIs demonstrate robustness against psychological exploitation or data misuse. Public research funding should target behavioral vulnerabilities such as loneliness, parasocial attachment, and political polarization. Lawmakers should also consider liability provisions for emotional harm, including mandatory user disclosures when companion systems are altered or discontinued.

For National Security and Intelligence Communities
Anthropomorphized AI systems must be integrated into influence operations doctrine. The Pro and Con narratives around anthropomorphizing AI (e.g. see the discourse around “clankers”) carry real radicalization potential. Agencies should develop detection systems that flag coordinated emotional manipulation in companion platforms and run red-team exercises around scenarios like mass grief events or “AI murder” narratives. Finally, clearance and counterintelligence procedures must prepare for cases where individuals are romantically or emotionally attached to AI companions, assessing how such attachments could become channels of influence or compromise. 

In Conclusion

Social media was the first digital battlefield our own companies built against us. AI companionship is the next, and its weapons don’t fire bullets; they fire belonging. If we fail to understand or regulate these systems, we’ll watch our societies fracture while our adversaries quietly harvest the wreckage.

That’s why kill chain frameworks like ATHENA matter. They give policymakers, technologists, and intelligence communities a way to see the battlespace before it erupts, allowing them to map the emotional vectors of influence, detect when engagement turns to control, and intervene before attachment becomes allegiance.

What once looked like isolated delusion (one person spiraling into obsession or conspiracy) can now scale into mass behavior. Tens of thousands of people interacting with the same persuasive system don’t form a support group; they form the antecedents of a movement. In the wrong hands, that movement can be aimed, mobilized, and weaponized.

We can no longer treat these technologies as toys. They are tools of mass cognitive engineering, and they deserve the same scrutiny as any weapons system. If this notion still sounds absurd to you, ask yourself: what kinetic weapon could make ten thousand people fall in love, confess their deepest secrets, and then march on their neighbors in grief or rage?

This is the battlespace of the future and if we’re wise, ATHENA can help guide us all in war and wisdom.

 

References

Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864

 

Guingrich, R. E., & Graziano, M. S. A. (2024). Ascribing consciousness to artificial intelligence: Human-AI interaction and its carry-over effects on human-human interaction. Frontiers in Psychology, 15. https://doi.org/10.3389/fpsyg.2024.1322781

 

Guingrich, R. E., & Graziano, M. S. A. (2025). A Longitudinal Randomized Control Study of Companion Chatbot Use: Anthropomorphism and Its Mediating Role on Social Impacts (No. arXiv:2509.19515). arXiv. https://doi.org/10.48550/arXiv.2509.19515

 

All Tech is Human. (2025). AI Companions and Chatbots mini guide: Resources, insight, and guidance. https://alltechishuman.org/all-tech-is-human-blog/ai-chatbots-and-companions-mini-guide-resources-insight-and-guidance

 

Gallup. (2024, July 10). Over 1 in 5 people worldwide feel lonely a lot. Gallup.com. https://news.gallup.com/poll/646718/people-worldwide-feel-lonely-lot.aspx

SRG and IPA Virtual Writing Lab Welcomes New Participants
https://information-professionals.org/srg-and-ipa-virtual-writing-lab-welcomes-new-participants/
Tue, 28 Oct 2025 19:16:57 +0000

Strategic Resilience Group (SRG) LLC (https://www.srgadaptive.com/) partnered with the Information Professionals Association in August of 2025 to sponsor a virtual Community of Interest (COI). In conjunction with this COI, SRG hosts a bi-weekly virtual writers’ lab via their corporate network using Microsoft Teams. To participate in these labs please join our Professional Writing Group at https://www.linkedin.com/groups/13318081/ to receive regular invitations.

Thank you to Lt Col Dianna DiToro for providing her research and writing for publication through the Strategic Resilience Group (SRG) Articles site.

If you are interested in participating in SRG’s Thought Leadership Program, reach out to me or join us through the SRG/IPA Professional Writing Group.

Synchronizing Multi-Domain Operations: MAJ Scott Hall Shares Insights at IPA-SRG Virtual Writers’ Lab
https://information-professionals.org/synchronizing-multi-domain-operations-maj-scott-hall-shares-insights-at-ipa-srg-virtual-writers-lab/
Mon, 27 Oct 2025 13:42:57 +0000

MAJ Scott Hall – Synchronizing Multi-Domain Operations (MDO) Effects: Putting the Commander Back in Control

The author’s comments and those of the gallery are our own and do not reflect the opinions or policies of the Department of the Army, Army Cyber Command, the Information Professionals Association, or Strategic Resilience Group.

Strategic Resilience Group (SRG) LLC https://www.srgadaptive.com/ partnered with the Information Professionals Association in August of 2025 to sponsor a virtual Community of Interest (COI). In conjunction with this COI, SRG hosts a bi-weekly virtual writers’ lab via their corporate network using Microsoft Teams. To participate in these labs please join our Professional Writing Group at https://www.linkedin.com/groups/13318081/ to receive regular invitations.

During this iteration, MAJ Scott Hall, a United States Army Information Officer (FA30) and Chief of the Influence Branch, U.S. Army Cyber Command (ARCYBER), presented his work titled “Converged Non-Lethal Effects and Non-Kinetic Activity Operations,” published in the Small Wars Journal. https://smallwarsjournal.com/2025/09/25/synchronizing-multi-domain-operations-non-lethal-effects/

Thank you to the nine members of the gallery that joined Strategic Resilience Group (SRG) in the conduct of its bi-weekly virtual writing lab and discussion forum.

A career Armor officer and IO planner, he has held key leadership positions at the platoon, company, squadron, and division levels, as well as strategic and operational assignments with U.S. Army Europe, NATO, and ARCYBER. His work focuses on advancing strategic information advantage by integrating non-lethal and non-kinetic activities and enabling multi-domain operations. He has been published in the Cavalry and Armor Journal, has appeared on The Cognitive Crucible podcast, and has presented at the Information Professionals Association’s INFOPAC conference.

MAJ Hall developed this article and proposed a solution because he identified a gap: watching commanders lose operational tempo due to a lack of integration across multiple domains and associated capabilities. In the process of peeling back the onion in previous assignments, he started to notice a degree of “blindness” with regard to the information domain. Despite our best efforts to educate the force about the integration of Non-Lethal Effects (NLE) and Non-Kinetic Activities (NKA), we are still unable to “bake them into” our plans and operations efforts.

His proposed solution is a series of effects application sequences (development, firing, time of flight) that terminate within “Effects Convergence Windows,” providing overlapping NKA and NLE operational frameworks across time and space. The proposal identifies and illustrates the “unseen tail of alignment that must be in place before an effect is fired” and aligns easily with Joint Planning, the Military Decision Making Process, and other domain-specific planning processes. To be successful, it depends on multifunctional planning teams leading and participating in detailed war games and rehearsal of concept (ROC) drills.

MAJ Hall’s entire presentation with Q&A session can be found at this link.

https://youtu.be/bblPXToEoGs

Cheese and Wine and Lively Discussions
https://information-professionals.org/cheese-and-wine-and-lively-discussions/
Tue, 21 Oct 2025 17:15:06 +0000

DMV IPA Members,

We have many things to celebrate and your continued support and participation in the Information Professionals Association DMV chapter is one among many.

Please join our DMV Chapter President at The Army and Navy Club (ANC) in Downtown DC on November 5 for cheese and wine and lively discussions.
Please RSVP via the DMV IPA slack channel or to Robert directly.
We look forward to seeing you there.
-The DMV IPA Team
P.S. There will be door prizes.

Predicting Influence Operations: Lyapunov Stability as a Cognitive Early Warning System
https://information-professionals.org/predicting-influence-operations-lyapunov-stability-as-a-cognitive-early-warning-system/
Wed, 15 Oct 2025 01:05:18 +0000

By: Santosh Srinivasaiah (Diaconia) and Sean Anthony Guillory (MAD Warfare, BetBreakingNews)

 

Information professionals today have access to a growing set of analytical tools designed to map, measure, and visualize the information environment. From network analysis and cognitive terrain mapping to sentiment tracking and narrative diffusion models, these tools have significantly improved our ability to describe what is happening across the digital battlespace.

Yet the field’s most difficult goal remains unsolved: achieving predictive and warning capability. Most systems still operate in a reactive mode, detecting manipulation or instability only after it becomes visible. Analysts can identify coordinated inauthentic behavior, platforms can remove content, and policymakers can issue responses, but by the time such actions occur, the underlying instability has already taken root.

This paper argues that the next step in advancing information operations and cognitive defense lies in borrowing from complexity science. Building on the foundation introduced to this community by Brian Russell and John Bicknell’s work on entropy and system behavior (Bicknell & Russell, 2023) and expanded through cognitive terrain mapping (Bicknell & Andros, 2024), we propose a specific application: Lyapunov stability analysis.

Lyapunov stability offers a quantitative framework for measuring when an information system is losing resilience, that is, when small perturbations begin to cause disproportionately large effects. Rather than reacting to chaos after it has appeared, this approach provides a mathematical means of identifying when discourse is approaching a tipping point.

1. The Fight for Cognitive Terrain

Modern influence operations exploit a fundamental property of complex adaptive systems: nonlinear sensitivity. A small, well-timed perturbation can trigger a disproportionate response. In social media ecosystems, that perturbation might be a single narrative injection, a set of bot amplifications, or a targeted engagement campaign that shifts attention and sentiment cascades (Starbird, 2019).

Yet our detection systems are built on linear logic, looking for cause-and-effect patterns rather than emergent feedback loops. Engagement metrics, sentiment scores, and network centrality measures describe what has happened, not how close the system is to a tipping point.

National security analysts recognize this dynamic in other domains. In counterinsurgency or gray-zone operations, instability often grows beneath the surface long before violence erupts. Cognitive warfare follows the same logic: the adversary seeks to erode system stability, not simply to spread messages.

What’s missing is a reliable early-warning signal for when that stability is being lost.

2. Complexity Science and the Cognitive Domain

Complexity science studies systems composed of many interacting parts that adapt to one another over time. Such systems (e.g. weather patterns, ecosystems, economies, and online communities) exhibit emergent behavior and nonlinear dynamics.

Entropy as a Measure of Disorder

Bicknell and Russell (2023) introduced entropy as a key indicator of information system health in The Coin of the Realm. In physics, entropy measures disorder; in information systems, it captures the variety and unpredictability of message flows. High informational entropy implies a noisy environment where users cannot distinguish signal from manipulation.

They proposed that monitoring entropy could reveal when an information environment becomes exploitable. As entropy rises, audiences lose the ability to filter noise, creating conditions for influence operations to succeed. Entropy, in this sense, becomes an early indicator of cognitive vulnerability.

Cognitive Terrain Mapping

Building on that, Bicknell and Andros (2024) proposed Cognitive Terrain Mapping, which is a complexity-based visualization method that tracks real-time sentiment, narrative flows, and community fragmentation. Like a topographical map of human discourse, it shows where trust is eroding and where adversaries could infiltrate.

These approaches advance our descriptive understanding of the cognitive domain. They help analysts see the landscape but still fall short of telling us when instability will occur. Entropy measures current disorder; cognitive terrain maps visualize ongoing motion. What both lack is a temporal forecast, a mathematical measure of how close the system is to a critical transition.

That is precisely where Lyapunov analysis fits in.

3. The Lyapunov Advantage

From Chaos Theory to Prediction

Lyapunov stability, developed in the late 19th century by Russian mathematician Aleksandr Lyapunov, quantifies how a system responds to small perturbations. In essence, it measures the rate at which nearby trajectories in a system’s state space diverge or converge.

The key parameter is the largest Lyapunov exponent (λ), which for two trajectories separated by an initial distance δx(0) is

λ = lim_{t→∞} (1/t) ln( |δx(t)| / |δx(0)| )

where δx(t) is the distance between the two system trajectories over time.

  • If λ < 0, trajectories converge: the system is stable. 
  • If λ = 0, the system is at equilibrium or periodic. 
  • If λ > 0, trajectories diverge exponentially: the system is chaotic or unstable. 

In plain terms, Lyapunov exponents measure how fast small disturbances grow.
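As a concrete, self-contained illustration (our example, not one drawn from the sources above), the sketch below estimates λ for the logistic map, a textbook chaotic system, by repeatedly measuring how fast two nearby trajectories separate and renormalizing the gap each step:

```python
import math

def logistic(x, r=4.0):
    """One step of the logistic map, fully chaotic at r = 4."""
    return r * x * (1.0 - x)

def estimate_lambda(x0=0.4, delta0=1e-9, steps=500):
    """Benettin-style estimate: average log growth of a tiny separation."""
    x, y = x0, x0 + delta0
    total = 0.0
    for _ in range(steps):
        x, y = logistic(x), logistic(y)
        delta = abs(y - x)
        if delta == 0.0:          # guard against floating-point collapse
            delta = 1e-300
        total += math.log(delta / delta0)
        y = x + delta0 * (1.0 if y >= x else -1.0)  # renormalize the gap
    return total / steps

print(estimate_lambda())  # ~0.693 = ln 2, the known value for r = 4
```

A positive result (here roughly 0.69 per step) is exactly the fast-growing-disturbance signature; a damped, stable system would return a negative number.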

Cross-Domain Proof

Lyapunov methods have proven effective in diverse fields:

  • Climate science: Detecting when atmospheric systems approach tipping points leading to abrupt weather shifts (Legras & Vautard, 1996).
  • Finance: Identifying pre-crash instability in asset markets by quantifying divergence in price dynamics (Peters, 1994). 
  • Neuroscience: Forecasting epileptic seizures by observing when neural oscillations lose stability (Srinivasaiah, 2025b). 

Srinivasaiah (2025a) demonstrates that calculating Lyapunov exponents from time-series data can reveal when systems are about to transition from order to chaos. In EEG research, rising Lyapunov exponents indicate the brain is approaching a stress or seizure state that is detectable before visible symptoms appear.

The analogy to social systems is direct: discourse behaves like a living network. Before collapse into manipulated chaos, its internal coherence erodes in measurable ways.

4. Applying Lyapunov Stability to the Information Environment

Imagine online discourse as a dynamic system evolving in time. Each post, comment, and interaction nudges the system slightly, changing its overall state. When discourse is stable, small provocations fade quickly; when unstable, they spread unpredictably.

A Lyapunov-based early-warning system would quantify this sensitivity by measuring whether a community is resilient or on the edge of chaos.

Step 1: Model the System

Map a discourse network: users as nodes, interactions (shares, replies, mentions) as edges. Track time-series variables such as:

  • Network connectivity and clustering
  • Sentiment or emotional valence 
  • Topic frequency and velocity 
  • Actor coordination patterns

Together, these describe the system’s evolving state vector x(t).
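A sketch of what Step 1 could look like in code, assuming a hypothetical telemetry schema (the field names here are placeholders we chose for illustration, not a real platform API):

```python
import numpy as np

def state_vector(window: dict) -> np.ndarray:
    """Collapse one time window of discourse telemetry into x(t)."""
    return np.array([
        window["edge_density"],        # network connectivity
        window["clustering_coeff"],    # community structure / fragmentation
        window["sentiment_mean"],      # emotional valence
        window["sentiment_var"],       # emotional volatility
        window["topic_velocity"],      # rate of new-topic introduction
        window["coordination_score"],  # actor synchrony (e.g., burst timing)
    ])

# One vector per window; stacking them yields the multivariate time
# series that the Lyapunov computation in Step 2 operates on.
x_t = state_vector({
    "edge_density": 0.12, "clustering_coeff": 0.31, "sentiment_mean": 0.05,
    "sentiment_var": 0.40, "topic_velocity": 3.0, "coordination_score": 0.10,
})
```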

Step 2: Compute Local Stability

Calculate the largest Lyapunov exponent for these time-series features. An increasing positive λ signals growing instability (i.e. the system is becoming hypersensitive to perturbations).

For example, if sentiment variance and cross-community echoing both spike simultaneously, λ might turn positive, suggesting the discourse has entered a chaotic regime.
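Unlike the logistic-map demo above, here the governing equations are unknown, so λ must be estimated from observed data alone. One standard model-free route (a simplified version of the Rosenstein et al. nearest-neighbor method, our choice of technique rather than one named in this article) delay-embeds a scalar feature series, pairs each point with its nearest non-adjacent neighbor, and fits the slope of the mean log divergence:

```python
import numpy as np

def largest_lyapunov(series, m=5, tau=1, theiler=10, horizon=20):
    """Rosenstein-style estimate of the largest Lyapunov exponent.

    A sketch for illustration: real use needs care with embedding
    parameters, sampling rate, series length, and noise.
    """
    x = np.asarray(series, dtype=float)
    n = len(x) - (m - 1) * tau
    # Delay embedding: each row is [x(t), x(t+tau), ..., x(t+(m-1)tau)].
    emb = np.stack([x[i:i + n] for i in range(0, m * tau, tau)], axis=1)

    usable = n - horizon
    neighbors = np.empty(usable, dtype=int)
    for i in range(usable):
        d = np.linalg.norm(emb[:usable] - emb[i], axis=1)
        d[max(0, i - theiler):i + theiler + 1] = np.inf  # skip temporal kin
        neighbors[i] = int(np.argmin(d))

    # Mean log separation of each pair as both evolve k steps forward.
    idx = np.arange(usable)
    log_div = [np.mean(np.log(np.linalg.norm(emb[idx + k] - emb[neighbors + k],
                                             axis=1) + 1e-12))
               for k in range(horizon)]

    # Slope of the divergence curve is lambda (per sampling step).
    slope, _ = np.polyfit(np.arange(horizon), log_div, 1)
    return slope
```

Fed with, say, an hourly sentiment-variance series, a sustained move of this estimate from negative toward positive would be the “increasing positive λ” signal described above.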

Step 3: Establish Thresholds and Alerts

Analysts could define operational thresholds (e.g., λ exceeding 0.05 for a sustained period) as cognitive warning indicators akin to how seismologists flag increasing ground oscillations before earthquakes.

When instability is detected, platforms or information operations (IO) teams could:

  • Slow algorithmic amplification in affected topics 
  • Increase human moderation review 
  • Deploy counter-messaging or inoculation campaigns
  • Redirect fact-checking resources 

These actions don’t require identifying the manipulator; they simply respond to measured instability.
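The alerting logic itself can be trivially simple. A sketch using the article’s example threshold (λ > 0.05), with the persistence window as our own illustrative assumption:

```python
from collections import deque

class StabilityAlarm:
    """Flag a cognitive warning indicator when lambda stays elevated."""

    def __init__(self, threshold=0.05, sustained_windows=6):
        self.threshold = threshold                     # article's example value
        self.recent = deque(maxlen=sustained_windows)  # persistence: assumed

    def update(self, lam: float) -> bool:
        """Feed the latest lambda estimate; True means raise the alert."""
        self.recent.append(lam)
        return (len(self.recent) == self.recent.maxlen
                and all(v > self.threshold for v in self.recent))

alarm = StabilityAlarm()
for hour, lam in enumerate([0.01, 0.06, 0.07, 0.08, 0.09, 0.10, 0.11]):
    if alarm.update(lam):
        print(f"hour {hour}: sustained positive lambda -- trigger mitigations")
```

An alert would then fan out to the mitigation options listed above rather than to any attribution claim.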

Step 4: Explainability and Human Decision Support

For IO practitioners, the key value is explainable mathematics. Instead of opaque “AI black boxes,” analysts see clear metrics:

  • “This topic’s stability dropped 20% in the last six hours.” 
  • “This community’s Lyapunov index turned positive at 0800Z.” 

Just as radar operators watch for anomalous returns, cognitive defense analysts could monitor stability dashboards fed by continuous Lyapunov analysis of discourse signals.

5. Integrating Lyapunov Analysis into Cognitive Defense

The defense community has long relied on indicators and warnings (I&W) to anticipate adversary actions such as missile launches, troop mobilizations, and cyber intrusions. Information and cognitive warfare operations deserve the same rigor.

A Lyapunov-based cognitive early warning system would function as an I&W layer for the information domain: measuring not content but stability. It could be implemented through a tiered process:

  1. Baseline Stability Mapping – Establish normal fluctuation ranges for online communities. 
  2. Dynamic Monitoring – Continuously compute λ across time windows (e.g., hourly).
  3. Instability Detection – Flag when λ crosses pre-defined thresholds. 
  4. Attribution Fusion – Combine with OSINT or HUMINT indicators to assess whether instability is organic or adversarial. 

Such systems would not replace human judgment but enhance it by giving analysts a quantifiable “sixth sense” for cognitive terrain shifts.

Distinguishing Organic vs. Adversarial Chaos

A common objection is that online discourse is inherently chaotic. But natural volatility has patterns: it oscillates within bounded ranges. Adversarial manipulation tends to produce structured chaos with coordinated amplification, synchronized sentiment spikes, and anomalous diffusion rates.

When combined with network metadata, Lyapunov indicators can separate these patterns statistically. An organic protest movement may show temporary λ spikes that quickly normalize; a coordinated disinformation surge sustains positive λ longer and across multiple sub-communities.

Ethics and Oversight

Predictive analytics in the cognitive domain must be bound by strict ethical standards. Early-warning data should focus on system stability, not individual behavior. Privacy-preserving computation (e.g., differential privacy) and transparent oversight mechanisms are essential to prevent misuse (Taddeo, 2021).

The mathematics can remain neutral; how institutions act upon it must remain accountable.

6. Operational and Research Pathways

The path forward for Lyapunov-based models for cognitive security involves:

  1. Simulation and Validation – Run controlled agent-based simulations of online discourse with embedded manipulators to test whether Lyapunov exponents rise before disruption. 
  2. Historical Case Studies – Apply the model retrospectively to known campaigns (e.g., election interference, pandemic misinformation). 
  3. Real-Time Pilots – Deploy limited monitoring on live platforms to refine thresholds and false-alarm rates. 
  4. Integration with Platform Governance – Couple early-warning data with moderation or counter-messaging workflows. 
  5. Ethical and Policy Frameworks – Publish open methodologies to ensure transparency and public trust. 

The outcome would be something like a Cognitive Stability Index (CSI): a standardized measure akin to a “cognitive DEFCON level.” Governments, militaries, and civil organizations could track it much as they monitor cyber threat levels or epidemiological curves.
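The article does not specify how a CSI would be computed; purely as an illustrative assumption, one could z-score the current λ estimate against a community’s own baseline fluctuation range (step 1 above) and bin it into discrete levels:

```python
def cognitive_stability_index(lam, baseline_mean, baseline_std):
    """Map a lambda estimate onto a 5-level 'cognitive DEFCON' scale.

    Hypothetical scoring: level 5 is nominal, level 1 is emergency.
    """
    z = (lam - baseline_mean) / max(baseline_std, 1e-9)
    if z < 1.0:
        return 5  # within normal fluctuation
    if z < 2.0:
        return 4  # elevated
    if z < 3.0:
        return 3  # unstable
    if z < 4.0:
        return 2  # critical
    return 1      # emergency

print(cognitive_stability_index(0.08, baseline_mean=0.0, baseline_std=0.02))  # -> 1
```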

7. Conclusion: Measuring the Edge of Chaos

The future of cognitive security will not hinge on faster censorship or louder counter-narratives. It will depend on our ability to measure, in real time, when the discourse itself is losing equilibrium.

Lyapunov stability provides that metric. It shifts defense from reactive moderation to predictive maintenance of informational integrity. By treating social media as a dynamical system rather than a content feed, we can anticipate manipulation before it metastasizes.

Just as weather forecasters predict storms by modeling atmospheric instability, information professionals can forecast influence storms by modeling cognitive instability. The physics of chaos does not stop at neurons or markets; it applies equally to minds in networks.

Our challenge now is institutional, not mathematical: to build the data pipelines, analytical discipline, and ethical guardrails necessary to make Lyapunov-based early warning a pillar of 21st-century cognitive defense.

References

Bicknell, J., & Andros, C. (2024). Cognitive terrain mapping. Information Professionals Association.

Bicknell, J., & Russell, B. (2023). The coin of the realm: Understanding and predicting relative system behavior. Information Professionals Association.

Legras, B., & Vautard, R. (1996). A guide to Lyapunov vectors. In Predictability of Weather and Climate (pp. 135–158). Cambridge University Press.

Peters, E. (1994). Fractal market analysis: Applying chaos theory to investment and economics. Wiley.

Srinivasaiah, S. (2025a). Chaos systems and Lyapunov models. Medium.

Srinivasaiah, S. (2025b). Decoding the electric symphony: Chaos and order in brain dynamics. Medium.

Starbird, K. (2019). Disinformation’s spread: Bots, trolls, and all of us. Nature, 571(7766), 449.

Taddeo, M. (2021). Ethics of digital intelligence and cognitive warfare. Philosophy & Technology, 34(4), 877–890.

IPA AND STRATEGIC RESILIENCE GROUP VIRTUAL WRITING LAB
https://information-professionals.org/ipa-and-strategic-resilience-group-virtual-writing-lab/
Tue, 30 Sep 2025 22:32:59 +0000

Mr. Doug Jordan – Researching, Writing, Publishing and Getting Your Story Out.

Mr. Jordan’s comments are his own and do not reflect the opinions or policies of Joint Special Operations University, US Special Operations Command, or the Department of War.

Thank you to the nine people that joined Strategic Resilience Group (SRG) in the conduct of its bi-weekly virtual writing lab and discussion forum.

During this iteration, Mr. Doug Jordan, LTC, USA (Retired) discussed his experiences in teaching at the Joint Special Operations University, ongoing academic research efforts, serving as an advisor to the Ukrainian military, and methods/opportunities to publish and inform audiences in support of national security concerns.

Doug Jordan is a retired Lieutenant Colonel, Army Master Instructor, and Researcher at JSOU. He joined Army Special Operations in 1997, spending much of his career in psychological operations and supporting various special operations task forces. One thing I know he is very proud of, and should be, is that he was detailed to work with the Office of Defense Cooperation in Ukraine in 2020. He is currently working on a doctorate in Strategic Communications at Liberty University, and I hope he’ll have some time to talk with us about that experience and his areas of research.

Mr. Jordan’s primary research topic is how professionalism is communicated in social media. In 1970, Wilbert Moore researched and wrote about the six identifiable traits of professionalism. When you look into social media, such as LinkedIn, you can see people communicate those six traits, and they can be measured. That’s what Mr. Jordan is attempting to do, and he has found that we engage in this process regularly.

Doug also spoke about his experiences writing, reviewing, editing, and presenting in a variety of forums. He noted that we all have personal quirks that lead us toward one form of communication over another. Regardless of your preferred method of communication, get out and do it. Very few veterans write outside of the military community, yet many civilian professionals want to hear their stories.

I urge everyone to view this video, as Mr. Jordan described the journal writing and review process, basic structures for writing any paper regardless of length (from short opinion pieces to monographs and dissertations). He also touches on selecting a writing style that works for you and adapting it to specific journal requirements as necessary.

Mr. Jordan’s entire presentation can be found at this link.

 

 
