Raspberry Pi used to hack NASA – Lack of basic security controls to blame.


Critical Path Security has spent a great deal of time using Raspberry Pi devices in adversarial physical penetration tests, with the goal of compromising the client's business network. It is one of the most successful tactics we employ, as the devices are small and versatile. Using a battery pack and a small WiFi antenna, Critical Path Security has demonstrated hundreds of vulnerabilities in some of the most well-defended networks in the world, leading to measurable improvements in our customers' security posture.

When we read the most recent breach report, we were not surprised to find that the NASA Jet Propulsion Laboratory (JPL) had failed to adhere to NIST standards, including the first requirement, "Inventory and Control of Hardware Assets." JPL operates a vast, interconnected network of 26,174 computer systems, 3,511 of which are servers. With legacy systems widely distributed across that network, breaching a device and moving laterally is much easier than in a more traditional enterprise network.

It was noted that several notable vulnerabilities went unaddressed for more than 180 days, largely because JPL administrators did not have a clear understanding of their responsibilities or of the systems they were tasked to manage and protect.

Given that an unauthorized Raspberry Pi was placed on the network, it is likely that multiple remote-control mechanisms were used to pipe data around the network, such as SSH tunnels or social media direct messaging. It has also been reported that proper network segmentation was not in place, which allowed the criminals to traverse the network gateway at JPL into other networks across the NASA global network. Because the agency has significant connectivity with nongovernmental educational and research institutions, it presents cybercriminals with a much larger attack surface than most other agencies.
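One practical way to hunt for this kind of covert channel is to flag long-lived or high-volume encrypted sessions to destinations that were never approved for them. The sketch below illustrates the idea against hypothetical flow records; the field names, thresholds, and addresses are illustrative assumptions, not details from the JPL investigation.

```python
from dataclasses import dataclass

# Hypothetical flow records, e.g. exported from a NetFlow collector.
@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    dst_port: int
    duration_s: int   # how long the session stayed open
    bytes_out: int    # bytes sent outbound

# Destinations approved for long-lived encrypted sessions (example addresses).
APPROVED_DESTINATIONS = {"192.0.2.10"}

def suspicious_flows(flows, max_duration_s=3600, max_bytes_out=50_000_000):
    """Flag long-lived or high-volume outbound SSH sessions to unapproved hosts.

    SSH tunnels used for exfiltration tend to stay open far longer, and move
    far more data outbound, than ordinary interactive admin sessions.
    """
    findings = []
    for f in flows:
        if f.dst_ip in APPROVED_DESTINATIONS:
            continue
        if f.dst_port == 22 and (f.duration_s > max_duration_s
                                 or f.bytes_out > max_bytes_out):
            findings.append(f)
    return findings

flows = [
    Flow("10.0.0.5", "192.0.2.10", 22, 7200, 10_000),         # approved jump host
    Flow("10.0.0.9", "203.0.113.44", 22, 86400, 500_000_000), # day-long, 500 MB out
    Flow("10.0.0.9", "198.51.100.7", 443, 30, 4_000),         # ordinary HTTPS
]
for f in suspicious_flows(flows):
    print(f"ALERT: {f.src_ip} -> {f.dst_ip}:{f.dst_port} ({f.bytes_out} bytes out)")
```

Note that a heuristic like this only sees the network's edge; traffic routed through social media direct messaging would blend into ordinary HTTPS and requires proxy-level inspection instead.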

This was further compounded when NASA signed a five-year management agreement with CalTech in which several IT security requirements were never agreed upon by either side, leaving inconsistencies in security controls and incident response. CalTech is tasked with managing the overall JPL network, with some exclusions for mission-oriented systems. Significant failures have been documented as a result of this investigation.

One notable point of contention is that the JPL Security Operations Center (SOC) must provide indicators of compromise to the NASA SOC, but CalTech does not agree with the level of visibility NASA would like to have into devices on the JPL network, citing privacy concerns. This has led to a large and obvious gap in coverage.

What we know at this point is that in April 2018 the criminals breached the agency's network and stole approximately 500 MB of data related to Mars missions. It is likely that additional high-value data was collected but has not been disclosed. It is also likely that the reported size of the breach is inaccurate, as JPL has stated that its lack of visibility into networked systems remains an unsolved problem.

Unfortunately, breaches of the JPL network have become quite common.

In fact, there have been many notable breaches of JPL systems, including:

  • 2009 - A cyber attacker breached the JPL network and transferred over 22 gigabytes of data to a Chinese-based IP address.
  • 2011 - A Chinese-based IP address was associated with a breach that gained full access to 18 servers supporting missions, including the Deep Space Network and the Advanced Spaceborne Thermal Emission and Reflection Radiometer. A total of 87 gigabytes of data was stolen in this campaign.
  • 2016 - Abuse of encrypted SSL traffic allowed data to be transferred to an outside IP address without clear visibility for JPL administrators.
  • 2018 - A JPL account belonging to an external user was used to access the mission network. The attack was active for nearly a year, and the investigation is ongoing.

The Raspberry Pi placed on the network was not immediately noticed, due to the lack of accurate information about system components and assets, which prevented the organization from responding in a timely fashion. Interviews with JPL administrators revealed that the Information Technology Security Database (ITSDB) was routinely ignored because it was considered unreliable. Often, an administrator would add a device to the network intending to register it in the ITSDB later, but would forget to do so.
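The failure here is the most basic control of all: reconciling what is *observed* on the wire against what is *registered* in the inventory. A minimal sketch of that reconciliation, using hypothetical MAC addresses and hostnames (the inventory schema is an assumption, not the actual ITSDB), might look like this:

```python
# Hypothetical data: MAC addresses registered in the asset database versus
# addresses actually observed on the network (e.g., from switch ARP tables
# or a periodic discovery sweep).
registered = {
    "00:1a:2b:3c:4d:5e": "jpl-ws-114",
    "00:1a:2b:3c:4d:5f": "print-server-03",
}
observed = {
    "00:1a:2b:3c:4d:5e",
    "00:1a:2b:3c:4d:5f",
    "b8:27:eb:aa:bb:cc",  # present on the wire, absent from the inventory
}

def find_rogue_devices(registered, observed):
    """Return observed MAC addresses with no inventory record."""
    return sorted(observed - registered.keys())

for mac in find_rogue_devices(registered, observed):
    note = ""
    # b8:27:eb is the OUI block assigned to the Raspberry Pi Foundation,
    # so this prefix is an immediate red flag on an enterprise network.
    if mac.startswith("b8:27:eb"):
        note = " (Raspberry Pi Foundation OUI)"
    print(f"Unregistered device: {mac}{note}")
```

Run on a schedule and wired into alerting, even a check this simple would have surfaced the rogue Pi the first time it appeared, regardless of whether an administrator remembered to update the database.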

As the impact of the ongoing breaches has increased, so has the response. During the 2018 attack, the criticality was so high that Johnson Space Center severed its gateway connections with JPL, and those connections have yet to be fully restored due to fears of lateral movement and the lack of remediation of known vulnerabilities on JPL-hosted systems.

As of 2019, 153 remediation waivers remain open, 54 of which were granted to employees who no longer work at JPL, meaning the documentation supporting those waiver requests has likely been lost. It should also be noted that the open waivers include the following:

  • Waive installation of antivirus software.
  • Waive monthly installation of patches.
  • Waive the requirement to change passwords every 90 days. As of today, Critical Path Security is aware of 10,840 leaked JPL credentials in which both the username and password are readily available.
  • Waive the requirement for system administrators to delete unused user accounts.

Leak Sample

The JPL SOC currently utilizes Splunk Enterprise Security (ES) for notification of security incidents, but no processes are in place to ensure that all critical systems report their security logs to the instance. Furthermore, even once logs were forwarded to Splunk ES, the system's administrator was not obligated to review them for anomalies or malicious behavior. Making a bad situation worse, 8 of 11 system administrators claimed they were unable to review logs because they lacked access to Splunk ES.
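Verifying logging coverage is itself automatable: compare the list of critical systems against the hosts the SIEM has actually heard from recently. The sketch below uses hardcoded sample data; in practice the last-event timestamps would come from a SIEM query (in Splunk, something along the lines of a `tstats` search for the latest `_time` by host), and the hostnames and 24-hour threshold here are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Reference time for the check (fixed here so the example is reproducible).
now = datetime(2019, 6, 1, 12, 0)

# Hypothetical map of host -> timestamp of its last event seen by the SIEM.
last_event = {
    "jpl-dns-01": now - timedelta(minutes=5),
    "jpl-web-02": now - timedelta(days=3),  # silent for three days
    "jpl-db-01": None,                      # never reported at all
}
critical_systems = ["jpl-dns-01", "jpl-web-02", "jpl-db-01", "jpl-fw-01"]

def silent_systems(critical, last_event, max_age=timedelta(hours=24)):
    """Return critical systems with no recent events in the SIEM."""
    silent = []
    for host in critical:
        ts = last_event.get(host)
        if ts is None or now - ts > max_age:
            silent.append(host)
    return silent

print(silent_systems(critical_systems, last_event))
# ['jpl-web-02', 'jpl-db-01', 'jpl-fw-01']
```

A system that stops forwarding logs looks identical to a system with nothing to report; only a coverage check like this distinguishes the two.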

Most security analysts relied on Indicators of Compromise (IOCs) and lacked sufficient training to perform contextualized threat hunting in the JPL network environments. Because JPL does not fund training or certification for its analysts and system administrators, ongoing education rarely occurs. As a result, no meaningful or time-sensitive incident response plans or practices exist; response currently depends on each analyst's individual experience, and that knowledge is not captured in a centralized repository.
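IOC matching is the reactive baseline those analysts were leaning on: scan logs for artifacts already known to be bad. A minimal sketch of that baseline follows, with a hypothetical IOC feed and log lines; proper threat hunting layers analyst-driven hypotheses on top of this, which is exactly what the OIG found missing.

```python
import re

# Hypothetical IOC list (in practice, fed from a SOC or threat
# intelligence platform; these are documentation-range addresses).
ioc_ips = {"203.0.113.44", "198.51.100.7"}

log_lines = [
    "2018-04-12T03:14:07Z sshd[211]: Accepted publickey for ops from 203.0.113.44",
    "2018-04-12T03:15:22Z httpd: GET /telemetry from 10.0.0.23",
]

# Matches dotted-quad IPv4 addresses embedded in free-form log text.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def match_iocs(lines, ioc_ips):
    """Return (line, matched_ip) pairs where a known-bad IP appears."""
    hits = []
    for line in lines:
        for ip in IP_RE.findall(line):
            if ip in ioc_ips:
                hits.append((line, ip))
    return hits

for line, ip in match_iocs(log_lines, ioc_ips):
    print(f"IOC hit ({ip}): {line}")
```

The limitation is the one the report highlights: IOC matching only catches attackers someone else has already documented, while a novel implant such as an unregistered Raspberry Pi produces no known-bad artifact to match against.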

The Office of Inspector General recommends that NASA and JPL develop and implement a routinely updated incident response plan.

Of the 10 recommendations provided by the Office of Inspector General, NASA concurs with 9. The non-concurred item relates to the threat-hunting process, for which NASA has requested documented guidance from NIST.

Those recommendations are outlined below:

1. Require all system administrators to review and update the ITSDB to ensure all system
components are properly registered in the database, and require the JPL CITO to periodically
review the ITSDB for compliance with this requirement.

2. Segregate shared environments connected to the network gateway for all partners accessing
the JPL network and monitor partner activity when accessing the network.

3. Review and update ISAs for all partners connected to the network gateway to ensure they are
up-to-date and made available to the NASA OCIO.

4. Require the JPL CITO to identify and remediate weaknesses in the SPL ticket process and provide
periodic aging reports to the JPL CIO detailing the status of open SPL tickets, pending patches,
and outdated security waivers.

5. Require the JPL CITO to complete its validation and updates of open waivers, perform annual
reviews to ensure system representatives are validating the need for the waiver, and provide
NASA documentation of these waivers.

6. Clarify the division of responsibility between the JPL OCIO and system administrators for
conducting routine log reviews and monitor their compliance with this requirement on a more
frequent basis.

7. Implement the planned role-based training program by July 2019.

8. Establish a formal, documented threat-hunting process that includes roles and responsibilities,
standard processes for conducting a hunt, and metrics to track success.

9. Develop and implement a comprehensive strategy for institutional IT knowledge and incident
management that includes the dissemination of lessons learned to system administrators and
other appropriate personnel.

10. Include requirements in the pending IT Transition Plan for implementation of continuous
monitoring tools that provide the NASA SOC with oversight of JPL network security practices to
ensure they adequately protect NASA data, systems, and applications.

Following NIST's guidance, all organizations should adhere to the Framework for Improving Critical Infrastructure Cybersecurity, whose five Functions are summarized below.

The Identify Function assists in developing an organizational understanding of managing cybersecurity risk to systems, people, assets, data, and capabilities. Understanding the business context, the resources that support critical functions, and the related cybersecurity risks enables an organization to focus and prioritize its efforts, consistent with its risk management strategy and business needs.

  • Identifying physical and software assets within the organization to establish the basis of an Asset Management program
  • Identifying the Business Environment the organization supports, including the organization's role in the supply chain and its place in the critical infrastructure sector
  • Identifying cybersecurity policies established within the organization to define the Governance program as well as identifying legal and regulatory requirements regarding the cybersecurity capabilities of the organization
  • Identifying asset vulnerabilities, threats to internal and external organizational resources, and risk response activities as a basis for the organization's Risk Assessment
  • Identifying a Risk Management Strategy for the organization including establishing risk tolerances
  • Identifying a Supply Chain Risk Management strategy including priorities, constraints, risk tolerances, and assumptions used to support risk decisions associated with managing supply chain risks

The Protect Function outlines appropriate safeguards to ensure delivery of critical infrastructure services. The Protect Function supports the ability to limit or contain the impact of a potential cybersecurity event.

  • Protections for Identity Management and Access Control within the organization including physical and remote access
  • Empowering staff within the organization through Awareness and Training including role based and privileged user training
  • Establishing Data Security protection consistent with the organization’s risk strategy to protect the confidentiality, integrity, and availability of information
  • Implementing Information Protection Processes and Procedures to maintain and manage the protections of information systems and assets
  • Protecting organizational resources through Maintenance, including remote maintenance, activities
  • Managing Protective Technology to ensure the security and resilience of systems and assets are consistent with organizational policies, procedures, and agreements

The Detect Function defines the appropriate activities to identify the occurrence of a cybersecurity event. The Detect Function enables timely discovery of cybersecurity events.

  • Ensuring Anomalies and Events are detected, and their potential impact is understood
  • Implementing Security Continuous Monitoring capabilities to monitor cybersecurity events and verify the effectiveness of protective measures including network and physical activities
  • Maintaining Detection Processes to provide awareness of anomalous events

The Respond Function includes appropriate activities to take action regarding a detected cybersecurity incident. The Respond Function supports the ability to contain the impact of a potential cybersecurity incident.

  • Ensuring Response Planning processes are executed during and after an incident
  • Managing Communications during and after an event with internal stakeholders, law enforcement, and external stakeholders as appropriate
  • Conducting Analysis to ensure effective response and support recovery activities, including forensic analysis and determining the impact of incidents
  • Mitigation activities are performed to prevent expansion of an event and to resolve the incident
  • The organization implements Improvements by incorporating lessons learned from current and previous detection / response activities

The Recover Function identifies appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity incident. The Recover Function supports timely recovery to normal operations to reduce the impact from a cybersecurity incident.

  • Ensuring the organization implements Recovery Planning processes and procedures to restore systems and/or assets affected by cybersecurity incidents
  • Implementing Improvements based on lessons learned and reviews of existing strategies
  • Internal and external Communications are coordinated during and following the recovery from a cybersecurity incident

This report and its findings make it painfully obvious that many organizations, even large ones, simply are not capable of defending their ever-expanding environments. Ongoing unresolved disputes between managed service providers, vendors, and the organization itself must be settled quickly.

It has become more important that organizations utilize the help of specialists, like Critical Path Security, to help close the gaps in their defenses.