CS4: COMPUTER INTEGRITY AND SECURITY

This Unit is about Computer Integrity and Security.

Data integrity

Data integrity is the maintenance of, and the assurance of the accuracy and consistency of, data over its entire life-cycle, and is a critical aspect of the design, implementation, and usage of any system that stores, processes, or retrieves data.

The term is broad in scope and may have widely different meanings depending on the specific context – even under the same general umbrella of computing.

It is at times used as a proxy term for data quality, while data validation is a pre-requisite for data integrity. Data integrity is the opposite of data corruption.

The overall intent of any data integrity technique is the same: ensure data is recorded exactly as intended (such as a database correctly rejecting mutually exclusive possibilities) and, upon later retrieval, ensure the data is the same as it was when it was originally recorded.

In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused with data security, the discipline of protecting data from unauthorized parties.

Any unintended change to data as the result of a storage, retrieval, or processing operation, whether caused by malicious intent, unexpected hardware failure, or human error, is a failure of data integrity.

If the changes are the result of unauthorized access, it may also be a failure of data security. Depending on the data involved, the consequences could range from something as benign as a single pixel in an image appearing a different color than was originally recorded, to the loss of vacation pictures or a business-critical database, to catastrophic loss of human life in a life-critical system.

Integrity types

Physical integrity

Physical integrity deals with challenges associated with correctly storing and fetching the data itself. Challenges with physical integrity may include electromechanical faults, design flaws, material fatigue, corrosion, power outages, natural disasters, acts of war and terrorism, and other special environmental hazards such as ionizing radiation, extreme temperatures, pressures and g-forces.

Ensuring physical integrity includes methods such as redundant hardware, an uninterruptible power supply, certain types of RAID arrays, radiation-hardened chips, error-correcting memory, use of a clustered file system, use of file systems that employ block-level checksums such as ZFS, storage arrays that compute parity calculations such as exclusive or (XOR) or use a cryptographic hash function, and even a watchdog timer on critical subsystems.

Physical integrity often makes extensive use of error-detecting algorithms known as error-correcting codes. Human-induced data integrity errors are often detected through the use of simpler checks and algorithms, such as the Damm algorithm or Luhn algorithm.

These are used to maintain data integrity after manual transcription from one computer system to another by a human intermediary (e.g. credit card or bank routing numbers). Computer-induced transcription errors can be detected through hash functions.
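As a concrete illustration, here is a minimal Python sketch of the Luhn check mentioned above; the digit strings used are standard test values rather than real account data.

    def luhn_valid(number: str) -> bool:
        """Return True if a digit string passes the Luhn check."""
        digits = [int(d) for d in number if d.isdigit()]
        total = 0
        # Double every second digit from the right; subtract 9 when the result exceeds 9.
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d = d * 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("79927398713"))   # True  (a standard Luhn test value)
    print(luhn_valid("79927398710"))   # False (one mistyped digit is detected)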

In production systems, these techniques are used together to ensure various degrees of data integrity. For example, a computer file system may be configured on a fault-tolerant RAID array, but might not provide block-level checksums to detect and prevent silent data corruption.

As another example, a database management system might be compliant with the ACID properties, but the RAID controller or hard disk drive’s internal write cache might not be.

Logical integrity

This type of integrity is concerned with the correctness or rationality of a piece of data, given a particular context. This includes topics such as referential integrity and entity integrity in a relational database or correctly ignoring impossible sensor data in robotic systems.

These concerns involve ensuring that the data “makes sense” given its environment. Challenges include software bugs, design flaws, and human errors.

Common methods of ensuring logical integrity include things such as check constraints, foreign key constraints, program assertions, and other run-time sanity checks.
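For illustration, a minimal Python sketch of a run-time sanity check in this spirit; the sensor and its plausible range are hypothetical assumptions for the example.

    def validate_temperature(celsius: float) -> float:
        """Reject physically implausible sensor readings before they enter the data store."""
        # Hypothetical plausible range for an outdoor temperature sensor.
        if not (-90.0 <= celsius <= 60.0):
            raise ValueError(f"implausible temperature reading: {celsius} C")
        return celsius

    reading = validate_temperature(21.5)   # accepted
    # validate_temperature(250.0) would raise ValueError: the impossible reading is rejected
    assert -90.0 <= reading <= 60.0, "program assertion: reading is within the declared range"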

Both physical and logical integrity often share many common challenges such as human errors and design flaws, and both must appropriately deal with concurrent requests to record and retrieve data, the latter of which is entirely a subject on its own.

Databases

Data integrity contains guidelines for data retention, specifying or guaranteeing the length of time data can be retained in a particular database.

To achieve data integrity, these rules are consistently and routinely applied to all data entering the system, and any relaxation of enforcement could cause errors in the data.

Implementing checks on the data as close as possible to the source of input (such as human data entry) causes less erroneous data to enter the system.

Strict enforcement of data integrity rules results in lower error rates and saves time spent troubleshooting and tracing erroneous data and the errors it causes in algorithms.

Data integrity also includes rules defining the relations a piece of data can have to other pieces of data, such as a Customer record being allowed to link to purchased Products, but not to unrelated data such as Corporate Assets.

Data integrity often includes checks and correction for invalid data, based on a fixed schema or a predefined set of rules; for example, textual data entered where a date-time value is required.
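As a small illustration of such a check, here is a Python sketch that rejects textual input where a date-time value is required; the expected format is an assumption for the example.

    from datetime import datetime

    def parse_timestamp(raw: str) -> datetime:
        """Reject textual input where a date-time value is required."""
        try:
            # Assumed format for the example; real systems validate against their own schema.
            return datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
        except ValueError as exc:
            raise ValueError(f"not a valid date-time value: {raw!r}") from exc

    parse_timestamp("2017-03-01 14:30:00")     # accepted
    # parse_timestamp("next Tuesday") would raise ValueError and never reach the database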

Rules for data derivation are also applicable, specifying how a data value is derived based on an algorithm, contributors, and conditions. They also specify the conditions under which the data value may be re-derived.

Types of integrity constraints

Data integrity is normally enforced in a database system by a series of integrity constraints or rules. Three types of integrity constraints are an inherent part of the relational data model: entity integrity, referential integrity and domain integrity.

  • Entity integrity concerns the concept of a primary key. Entity integrity is an integrity rule which states that every table must have a primary key and that the column or columns chosen to be the primary key should be unique and not null.
  • Referential integrity concerns the concept of a foreign key. The referential integrity rule states that any foreign-key value can only be in one of two states. The usual state of affairs is that the foreign-key value refers to a primary key value of some table in the database. Occasionally, and this will depend on the rules of the data owner, a foreign-key value can be null. In this case, we are explicitly saying that either there is no relationship between the objects represented in the database or that this relationship is unknown.
  • Domain integrity specifies that all columns in a relational database must be declared upon a defined domain. The primary unit of data in the relational data model is the data item. Such data items are said to be non-decomposable or atomic. A domain is a set of values of the same type. Domains are therefore pools of values from which actual values appearing in the columns of a table are drawn.
  • User-defined integrity refers to a set of rules specified by a user, which do not belong to the entity, domain and referential integrity categories.
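To make these constraint types concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are hypothetical, and other database systems declare the same constraints with their own syntax.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces foreign keys only when enabled

    conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,               -- entity integrity: unique and not null
        name        TEXT NOT NULL,
        email       TEXT CHECK (email LIKE '%@%')      -- domain / user-defined integrity
    );
    CREATE TABLE purchase (
        purchase_id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL
                    REFERENCES customer(customer_id),  -- referential integrity
        amount      REAL CHECK (amount >= 0)           -- domain integrity
    );
    """)

    conn.execute("INSERT INTO customer VALUES (1, 'Alice', 'alice@example.com')")
    try:
        # Violates referential integrity: customer 99 does not exist.
        conn.execute("INSERT INTO purchase VALUES (1, 99, 10.0)")
    except sqlite3.IntegrityError as err:
        print("rejected by the database:", err)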

If a database supports these features, it is the responsibility of the database to ensure data integrity as well as the consistency model for data storage and retrieval.

If a database does not support these features, it is the responsibility of the applications to ensure data integrity while the database supports the consistency model for the data storage and retrieval.

Having a single, well-controlled, and well-defined data-integrity system increases

  • stability (one centralized system performs all data integrity operations)
  • performance (all data integrity operations are performed in the same tier as the consistency model)
  • re-usability (all applications benefit from a single centralized data integrity system)
  • maintainability (one centralized system for all data integrity administration).

Modern databases support these features (see Comparison of relational database management systems), and it has become the de facto responsibility of the database to ensure data integrity.

Companies, and indeed many database systems, offer products and services to migrate legacy systems to modern databases.

Examples

An example of a data-integrity mechanism is the parent-and-child relationship of related records. If a parent record owns one or more related child records, all of the referential integrity processes are handled by the database itself, which automatically ensures the accuracy and integrity of the data so that no child record can exist without a parent (also called being orphaned) and no parent loses its child records.

It also ensures that no parent record can be deleted while the parent record owns any child records. All of this is handled at the database level and does not require coding integrity checks into each application.
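A minimal sketch of this behavior, again using Python's sqlite3 module with hypothetical parent and child tables, shows the database itself rejecting a delete that would orphan a child record.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")
    conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER NOT NULL REFERENCES parent(id));
    INSERT INTO parent (id) VALUES (1);
    INSERT INTO child  (id, parent_id) VALUES (10, 1);
    """)

    try:
        conn.execute("DELETE FROM parent WHERE id = 1")   # would orphan child record 10
    except sqlite3.IntegrityError as err:
        print("delete blocked at the database level:", err)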

File systems

Various research results show that neither widespread filesystems (including UFS, Ext, XFS, JFS and NTFS) nor hardware RAID solutions provide sufficient protection against data integrity problems.

Some filesystems (including Btrfs and ZFS) provide internal data and metadata checksumming that is used for detecting silent data corruption and improving data integrity.

If a corruption is detected that way and internal RAID mechanisms provided by those filesystems are also used, such filesystems can additionally reconstruct corrupted data in a transparent way.

This approach allows improved data integrity protection covering the entire data path, which is usually known as end-to-end data protection.
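The underlying idea can be illustrated with a short Python sketch: compute a checksum when data is written and verify it again after the data has traversed the full path. The file name is hypothetical, and real filesystems such as ZFS or Btrfs do this at the block level rather than per file.

    import hashlib

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Compute a SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Record the checksum when the data is written...
    expected = sha256_of("backup.img")          # hypothetical file name
    # ...and verify it again after the data has traversed the full path.
    if sha256_of("backup.img") != expected:
        print("silent corruption detected: data no longer matches its checksum")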

Data integrity as applied to various industries

  • The U.S. Food and Drug Administration has created draft guidance on data integrity for the pharmaceutical manufacturers required to adhere to U.S. Code of Federal Regulations 21 CFR Parts 210–212. Outside the U.S., similar data integrity guidance has been issued by the United Kingdom (2015), Switzerland (2016), and Australia (2017).
  • Various standards for the manufacture of medical devices address data integrity either directly or indirectly, including ISO 13485, ISO 14155, and ISO 5840.
  • Consumer healthcare companies producing over-the-counter therapies must also put safeguards in place to manage data integrity. In addition to compliance with FDA regulations, there is a significant risk to brand and reputation.
  • In early 2017, the Financial Industry Regulatory Authority (FINRA), noting data integrity problems with automated trading and money movement surveillance systems, stated it would make “the development of a data integrity program to monitor the accuracy of the submitted data” a priority. In early 2018, FINRA said it would expand its approach on data integrity to firms’ “technology change management policies and procedures” and Treasury securities reviews.
  • Other sectors such as mining and product manufacturing are increasingly focusing on the importance of data integrity in associated automation and production monitoring assets.
  • Cloud storage providers have long faced significant challenges ensuring the integrity or provenance of customer data and tracking violations.

COMPUTER INTRUSION

Computer intrusions typically involve computer networks, and there is always going to be the potential for valuable evidence on the network transmission medium.

Capturing network traffic can give investigators one of the most vivid forms of evidence, a live recording of a crime in progress (Casey, 2004).

This compelling form of digital evidence can be correlated with other evidence to build an airtight case, demonstrating a clear and direct link between the intruder and compromised hosts.

The problem is that this medium is volatile. If you are not capturing traffic on a network link when a transmission of interest is sent, then it is lost. You will never have access to that data.

It is advisable that you begin capturing traffic at the beginning of an incident, at least for the devices that are exhibiting the initial symptoms of a security event.

Later during the incident, you will be able to readjust your monitoring locations based upon more detailed results of your analysis.

In an ideal world, you would be able to capture all network traffic entering and leaving any network segment that is in the potential scope of the incident, as well as data to and from your network management and data “crown jewels.”

But your organization most likely does not have the capacity for network traffic collection on this scale, so you will need to prioritize your captures.

Your priority should be to collect network traffic to and from known compromised systems, so that you can collect enough information to be able to identify specific artifacts that can be used later to detect attacker activity across other network devices already in place, such as firewalls, proxy servers, and IDS sensors.
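As a rough illustration of prioritized capture, here is a Python sketch using the third-party scapy package; the compromised host address and output file name are hypothetical, and production monitoring would normally use dedicated capture infrastructure rather than a script like this.

    # Requires the third-party scapy package and privileges sufficient to sniff traffic.
    from scapy.all import sniff, wrpcap

    COMPROMISED_HOST = "192.0.2.15"     # hypothetical address of a known-compromised system

    # A BPF filter limits the capture to traffic to and from that host, as prioritized above.
    packets = sniff(filter=f"host {COMPROMISED_HOST}", count=1000)

    # Preserve the entire raw capture for later analysis, not just what seems relevant now.
    wrpcap("compromised_host_capture.pcap", packets)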

Whenever you encounter a source of logs on a network, preserve the entire original log for later analysis, not just portions that seem relevant at the time.

The reason for this admonition is that new information is often uncovered during an investigation and the original logs need to be searched again for this information.

If you do not preserve the original log, there is a risk that it will be overwritten by the time you return to search it for the new findings.

When exporting logs from any system, make sure that DNS resolution is not occurring because this will slow the process and may provide incorrect information (when DNS entries have changed since the time of the log entry).

Hoax Virus Communications

Traditional information security attack analysis focuses on computer intrusions and malware events in which an attacker gains access or causes damage to a system (or network) through the use of technical capabilities to exploit a vulnerability.

In 2015, there were a record number of “mega-breaches” and exponential increases in zero-day exploits and spear-phishing attacks; previous years showed similarly troubling numbers.

In this sea of cyber turbulence, a mercurial and understated threat permutation is the hoax virus: e-mails and social media messages that warn of dangerous but nonexistent cybersecurity threats that the user will supposedly receive.

While not as voluminous or dangerous as other types of malicious cyber activities, hoax viruses remain a consistent, evolving threat in the cyber threat landscape, particularly because attackers are borrowing effective components and precepts from these narratives to amplify new psychological cyber attacks.

While pure hoax viruses still circulate today, the more problematic evolved permutations of hoax viruses are tech support scams, scareware, and to some extent, ransomware.

What distinguishes this attack genre is that it heavily targets the human system operator and relies fundamentally upon deception and influence to succeed.

This section examines the history and typology of hoax viruses, then reveals how hoax virus attackers leverage the principles of persuasion to effectuate their attacks.

As discussed in Chapter 2, Virtual Myths: Internet Urban Legend, Chain Letters, and Warnings, the development of the Internet transformed the spread of urban myths in the 1980s.

As this technology matured and gained user traction, it enabled efficient contact between individuals, especially across large distances. As a result, this became a preferred medium for the spread of urban legends and hoax schemes.

Hoax virus communications were a natural extension of Internet urban myths and other hoax chains, relying upon the speed and efficiency of delivery over computer-mediated communications (CMCs) and modifying the scope of these narratives by invoking the specter of catastrophic, devastating computer viruses.

The first documented hoax virus, known as the “2400 Baud Modem” virus (Fig. 3.11), was actually quite verbose, artfully combining a narrative of hypertechnical language (technical acronyms, program names, etc.), the connotation of narrator expertise and credibility, and perceived legitimacy of the threat (Heyd, 2008).

Over time, newer and increasingly clever virus hoaxes proliferated and washed over users’ inboxes. Information security companies, researchers and government agencies began to catalog these hoaxes in the same fashion as true malware specimens are analyzed, categorized, and placed in a database for reference. Over years of samples, certain elements and heuristics were derived from these narratives (Gordon, 1997):

Indicators and Elements of a Hoax Virus

Warning announcement/cautionary statement. This is typically in all capital letters (“marquee” effect) to grab the recipient’s attention. The topic is often scary, ominous, compelling, or otherwise emotion-evoking. It urges recipients to alert everyone they know and sometimes tells them to do this more than once.

Pernicious, devastating malware. The content is a warning about a type of pernicious malware spreading on the Internet. The warning commonly comes from an individual, occasionally from a company, but never from the cited source; the malware described in the message has horrific destructive powers and often the ability to send itself by e-mail or social media.

Offered expertise and solutions. These messages warn the receiver not to read or download the supposed malware and offer solutions on how to avoid infection by the malware. This may be a list of instructions to self-remediate the system, file deletion, etc.

Credibility bolstering. The message conveys credibility by citing some authoritative source as issuing the warning. Usually the source says the virus is “bad” or has them “worried.”

Technical jargon. Virus hoax messages almost always have technical jargon, many times specious jargon describing the malware or technical threat and consequences.

Consequences. The recipient is advised of impending disaster to his/her computer, network/privacy, or other problematic circumstances that will arise if the message is not heeded.

Request/requirements. The hoax provides instructions or admonitions to the recipient that require him/her to conduct a further action, including or in addition to perpetuating the message. Some hoax viruses provide the recipient with instructions on how to locate and delete critical system files under the guise that they are malicious.

Transmission/perpetuation requirement. The recipient is admonished to continue the trajectory of the message by transmitting or broadcasting it to others via online communication.
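These indicators can be operationalized as simple heuristics. The following Python sketch is purely illustrative; the cue keywords and scoring approach are assumptions for the example, not a vetted detection method.

    import re

    # Illustrative cue patterns only, loosely drawn from the indicator list above.
    CUES = {
        "warning announcement": r"\b(warning|alert|urgent|do not open)\b",
        "devastating malware":  r"\b(destroy|erase|wipe|hard drive|irreparable)\b",
        "credibility citation": r"\b(microsoft|ibm|fbi|cnn|norton|mcafee)\b",
        "perpetuation request": r"\b(forward this|send this to everyone|tell all your friends)\b",
        "marquee effect":       None,   # handled below as the proportion of upper-case letters
    }

    def hoax_score(message: str) -> int:
        """Count how many of the catalogued hoax-virus indicators a message exhibits."""
        text = message.lower()
        score = sum(1 for pattern in CUES.values() if pattern and re.search(pattern, text))
        letters = [c for c in message if c.isalpha()]
        if letters and sum(c.isupper() for c in letters) / len(letters) > 0.5:
            score += 1
        return score

    sample = "WARNING!!! Forward this to everyone you know: a virus will ERASE your hard drive!"
    print(hoax_score(sample), "of", len(CUES), "indicators matched")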

Response Methods Based on Layers

The goal of active response is to automatically respond to a detected attack and minimize (or ideally, nullify) the damaging effects of attempted computer intrusions without requiring an administrator.

In general, there are four different strategies for network-based active response, each corresponding to a different layer of the protocol stack starting with the data link layer:

Data link. Administratively disable the switch port over which the attack is carried. This method does not require that the detection mechanism be inline to the attack traffic. If it is not inline, this implies that a race condition exists between the attack and the time required to disable the switch port.

Network. Alter a firewall policy or router ACL to block all packets to or from the attacker’s IP address. Again, the detection mechanism does not have to be inline to the attack traffic, and if it isn’t, the race condition exists between the attack and the time required to reconfigure the firewall policy or router ACL.

Transport. Generate TCP resets for attacks carried over TCP, or Internet Control Message Protocol (ICMP) port-unreachable messages for attacks sent over UDP. Recall that ICMP is a network-layer protocol, and hence it is possible to block ICMP only at the network layer. Once again, the detection mechanism does not necessarily have to reside on an inline device. Snort can spoof TCP reset packets into an established TCP connection regardless of whether it is running in inline mode.

Application. Alter the data portion of individual packets from the attacker. For example, if the attacker has provided a path to a /usr/bin/gcc compiler, change the packet so that the path points to a location that does not exist on the target system—such as /usr/ben/abc—before the packet reaches the target.

Note that this method may require the recalculation of the transport-layer checksum (mandatory for TCP and optional for UDP, unless the checksum was previously calculated). This method of response requires an inline device that can modify application-layer data en route.
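As a rough sketch of the transport-layer strategy, the following Python fragment crafts a spoofed TCP reset with the third-party scapy package; the addresses, ports, and sequence number are hypothetical, and this is not how SnortSam, Fwsnort, or snort_inline are actually implemented.

    # Illustrative only: requires the third-party scapy package and root privileges.
    from scapy.all import IP, TCP, send

    # Hypothetical values; a real response engine takes them from the offending
    # connection it has observed on the wire.
    rst = (IP(src="10.0.0.5", dst="198.51.100.7") /
           TCP(sport=80, dport=51515, flags="R", seq=1234567890))

    send(rst, verbose=False)   # inject the spoofed reset toward the attacking client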

This chapter discusses three software applications: SnortSam, Fwsnort, and snort_inline. Each implements active response capabilities based on the Snort IDS.

These applications alter or block traffic by IP address (SnortSam), by transport-layer protocol (Fwsnort), and by the application layer (snort_inline).

We will show how each active response application deals with a reconnaissance attack against the WWWboard discussion forum running on an Apache Web server, and a buffer overflow exploit in the Network File System (NFS) mountd daemon.

Note that this chapter focuses on how to automatically respond to attacks; we do not concentrate on complex or new exploits and we have deliberately chosen simplistic attack examples for illustration purposes.

Deploying active response capabilities on a network requires extremely careful tuning and a healthy awareness of the risks involved.

One of the chief problems with IDSes today is that false positives are commonplace, even from the most finely tuned IDS.

Unless you tune your IDS to the point of ignoring most attacks, it is simply impossible to avoid false positives when legitimate traffic can potentially contain some of the same characteristic signatures as malicious traffic.

Hence, there is always the possibility that an active response system will block traffic that really should be allowed through.

On a more sinister note, if an attacker discovers that active response is in use on a network, it may be possible for the attacker to subvert the response system into effectively creating a denial of service (DoS) attack against the network by making it appear as though attacks are coming from legitimate sources.

The attacker accomplishes this by crafting malicious-looking packets from faked sources, such that the automated active response blocks legitimate traffic from those sources.

This risk of self-imposed DoS is one of the primary reasons why many corporations are hesitant to implement active response mechanisms.

Most tools that offer an active response (including the ones mentioned here) also offer the capability to define the traffic that should never be blocked (a.k.a. whitelists). If the product you choose to implement doesn’t offer this capability, you might want to think twice before deploying it.

Increasingly, digital investigators are encountering cases that involve the use of removable media, specifically USB devices.

Whether the investigation deals with the theft of intellectual property, the possession of child pornography, embezzlement, or even computer intrusion, USB devices could potentially be related to the crime.

As such, examiners often need to get some idea of when, how many, and what types of USB devices have been connected to the computer(s) they are examining.

The Windows registry houses a wealth of information pertaining to USB devices that have been connected to a system. Data found in the SYSTEM\<ControlSet###>\Enum\USBSTOR subkey shown in Figure 5.42 can be particularly helpful.

The first-level subkeys under USBSTOR, such as Disk&Ven_SanDisk&Prod_Cruzer_Mini&Rev_0.2 in Figure 5.42, are device class identifiers taken from device descriptors and used to identify a specific kind of USB device.

The second-level subkeys (e.g., SNDK5CCDD5014E009703&0 in Figure 5.42) are unique instance identifiers used to identify specific devices within each class.

The unique instance identifier of a device is either the device’s serial number or (if the device does not have a serial number reported in its device descriptor) a pseudorandom value derived by Windows to uniquely identify the device.

Each unique instance identifier generally represents one USB device; so, seeing two different unique instance identifiers under one device class identifier could indicate that two different devices of similar type and manufacture were plugged into the system (such as two different 4GB SanDisk Cruzer thumb drives).
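A minimal sketch of enumerating these subkeys on a live Windows system with Python's standard winreg module follows (it may require administrative rights); offline examination of a seized SYSTEM hive would instead use a forensic suite or a hive-parsing library.

    # Windows-only sketch using the standard winreg module on a live system.
    import winreg

    USBSTOR = r"SYSTEM\CurrentControlSet\Enum\USBSTOR"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as root:
        for i in range(winreg.QueryInfoKey(root)[0]):               # device class identifiers
            class_id = winreg.EnumKey(root, i)
            with winreg.OpenKey(root, class_id) as class_key:
                for j in range(winreg.QueryInfoKey(class_key)[0]):  # unique instance identifiers
                    print(class_id, "->", winreg.EnumKey(class_key, j))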

The unique instance identifier can also be used to obtain other information important to the investigator. For example, because the unique instance identifier generally stays the same for a specific device on each Windows system to which it is connected, seeing the same unique instance identifier on multiple systems can be an indicator of the same device’s use with each of those computers.

Further, by write-protecting a seized USB device and plugging it into a virgin forensic computer, an examiner could record the unique instance identifier (and other device information) populated in the forensic computer’s registry and compare it to the registry of a suspect system to determine if that seized device had been similarly connected.

Tests performed with several hardware USB write-blockers have shown that Windows often populates a unique instance identifier for the write-blocking device rather than the suspect USB device, so a software write-blocking method may be preferred.

Whether a hardware or software USB write-blocking method is selected, the examiner should perform tests in advance with nonevidence media and observe and record the results.

A device’s unique identifier can also be found by viewing the Device Instance Id under the device’s properties in the Windows Device Manager on a running system to which the device is connected as shown in Figure 5.43.

Figure 5.43. Viewing a device’s properties in the Windows Device Manager reveals the device’s unique identifier, which can be matched to a corresponding registry subkey in the USBSTOR.

Additionally, by searching for a device’s unique instance identifier in the c:\windows\setupapi.log file, an examiner can determine the first time a USB device was connected to a system.

[2008/12/20 16:54:28 1084.7 Driver Install]

#-019 Searching for hardware ID(s): usbstor\disksandisk_cruzer_mini_____0.2_,usbstor\disksandisk_cruzer_mini_____,usbstor\disksandisk_,usbstor\sandisk_cruzer_mini_____0,sandisk_cruzer_mini_____0,usbstor\gendisk,gendisk

#-018 Searching for compatible ID(s): usbstor\disk,usbstor\raw

#-198 Command line processed: C:\WINDOWS\system32\services.exe

#I022 Found “GenDisk” in C:\WINDOWS\inf\disk.inf; Device: “Disk drive”; Driver: “Disk drive”; Provider: “Microsoft”; Mfg: “(Standard disk drives)”; Section name: “disk_install”.

#I023 Actual install section: [disk_install.NT]. Rank: 0x00000006. Effective driver date: 07/01/2001.

#-166 Device install function: DIF_SELECTBESTCOMPATDRV.

#I063 Selected driver installs from section [disk_install] in “c:\windows\inf\disk.inf”.

#I320 Class GUID of device remains: {4D36E967-E325-11CE-BFC1-08002BE10318}.

#I060 Set selected driver.

#I058 Selected best compatible driver.

#-166 Device install function: DIF_INSTALLDEVICEFILES.

#I124 Doing copy-only install of “USBSTOR\DISK&VEN_SANDISK&PROD_CRUZER_MINI&REV_0.2\SNDK5CCDD5014E009703&0”.

#-166 Device install function: DIF_REGISTER_COINSTALLERS.
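A simple Python sketch of this technique follows; it scans setupapi.log for a unique instance identifier and reports the timestamped driver-install header that precedes the first match. The log path and helper function are illustrative only.

    import re

    def first_install_entry(log_path: str, instance_id: str):
        """Return the timestamped header preceding the first mention of a unique instance identifier."""
        header = None
        with open(log_path, errors="ignore") as log:
            for line in log:
                if re.match(r"\[\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}", line):
                    header = line.strip()   # e.g. "[2008/12/20 16:54:28 1084.7 Driver Install]"
                if instance_id.lower() in line.lower():
                    return header, line.strip()
        return None, None

    header, hit = first_install_entry(r"C:\WINDOWS\setupapi.log", "SNDK5CCDD5014E009703&0")
    print(header)
    print(hit)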

Further, the examiner can locate the following registry subkeys:

SYSTEM\<ControlSet###>\Control\DeviceClasses\{53f56307-b6bf-11d0-94f2-00a0c91efb8b}

SYSTEM\<ControlSet###>\Control\DeviceClasses\{53f5630d-b6bf-11d0-94f2-00a0c91efb8b}

Subkeys in these locations can be seen to correspond with specific devices by unique instance identifier, with the Last Written date-time stamp on the device subkey indicating the last time the device was connected to the system (Figure 5.44).

Figure 5.44. The Last Written date-time stamp of the corresponding device subkey in DeviceClasses indicates that the device was connected to the system on 05/04/07.

The &0 at the end of the unique instance identifiers (whether serial number or Windows-derived) can be incremented to denote a related device.

For example, when a U3-enabled device is plugged into a system, it actually results in the creation of two virtual USB devices, a CD-Rom device and a Disk device; if the unique instance identifier for the virtual CD-Rom device is 6&38b32a79&0, it is likely that the complementary virtual Disk device will be 6&38b32a79&1.

Values located under each unique instance identifier subkey can include the ParentIdPrefix (not always populated) and FriendlyName for each device.

The FriendlyName is nothing more than a more detailed and less complex description of the device, which can contain manufacturer and model information (such as “Patriot Memory USB Device”).

It should be noted that numerous devices can report the same FriendlyName value, so this should not be used as a reliable means of unique device identification.

The ParentIdPrefix is a Windows-derived value that (if present) can be used to link each device with additional information. For example, the SYSTEM\MountedDevices subkey often contains values similar to those in Figure 5.45.

The underlying data associated with each of these device values contains a description of the device that was last mounted on that drive letter, including the device’s ParentIdPrefix.
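A minimal sketch of that association on a live system, using Python's standard winreg module, is shown below; the ParentIdPrefix value is hypothetical, and MountedDevices value data is binary UTF-16, so it is decoded before the substring test.

    # Windows-only sketch using the standard winreg module on a live system.
    import winreg

    PARENT_ID_PREFIX = "7&1ab3e5f0&0"    # hypothetical ParentIdPrefix taken from USBSTOR

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\MountedDevices") as key:
        for i in range(winreg.QueryInfoKey(key)[1]):         # number of values under the key
            name, data, _ = winreg.EnumValue(key, i)
            text = data.decode("utf-16-le", errors="ignore") if isinstance(data, bytes) else str(data)
            if PARENT_ID_PREFIX.lower() in text.lower():
                print(name, "was last mounted by the device with this ParentIdPrefix")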

Forensic examiners seeking to analyze USB activity on a system they are examining can, of course, conduct their analysis manually.

However, tools do exist to make the analysis easier, or at least a bit faster. Many of the forensic tool suites (e.g., EnCase, FTK, etc.) have scripts or add-on functionality that allow an examiner to dig the USBSTOR information from the registry and display it in a report-style format.

UVCView is a Microsoft development tool (http://msdn.microsoft.com/en-us/library/aa906848.aspx) that allows examiners to view USB device descriptors (the source of much of what is populated in the registry); it can be difficult to find, but at the time of this writing it is available at ftp://ftp.efo.ru/pub/ftdichip/Utilities/UVCView.x86.exe.

Another popular tool is USBDeview by NirSoft, which reads the SYSTEM hive to which it is pointed and attempts to do much of the association between USBSTOR, DeviceClasses, and MountedDevices data for the user.

Whatever tool is chosen, the examiner should always validate the results before placing them in a report or taking them to court.

Recent research by Rob Lee (Mandiant) and Harlan Carvey has also shed light on the HKLM\Software\Microsoft\Windows Portable Devices\Devices subkey, which contains data similar to that in the USBSTOR, including a history (partial, at least) of USB devices plugged into Windows Vista and Windows 7 systems.

An added benefit of this subkey is that it can show that multiple different devices were mounted under a particular drive letter on the system (e.g., E:\), thereby providing a longer historical record for an examiner; this is in contrast to the MountedDevices subkey discussed earlier, which shows only the last device mounted under a particular drive letter.

The port_dev.pl plugin for RegRipper (www.regripper.net/) parses data from the Windows Portable Devices\Devices subkey and provides an easy-to-read output, including each listed device’s FriendlyName, serial number (or unique identifier), and drive letter under which the device was mounted (providing these data are available in the subkey).

Forensic examiners should also keep in mind that link files can be an excellent source of data about the connection of external devices. Recall that link files contain the full path to their target file, including the drive letter and serial number of the volume on which the target file resides.

Matching the volume serial number in a link for a target file opened from removable media with the volume serial number of a seized USB thumb drive, and then matching that USB thumb drive to a specific computer via the registry is a pretty good way to get an investigation rolling.


Global Information Assurance Certification Forensic Analyst (GCFA)

The Global Information Assurance Certification Forensic Analyst (GCFA) certifies that the individual has the knowledge, skills, and abilities to utilize state-of-the-art forensic analysis techniques to solve complicated Windows- and Linux-based investigations.

GCFA experts can articulate complex forensic concepts such as the file system structures, enterprise acquisition, complex media analysis, and memory analysis.

GCFAs are front line investigators during computer intrusion breaches across the enterprise. They can help identify and secure compromised systems even if the adversary uses anti-forensic techniques.

Using advanced techniques such as file system timeline analysis, registry analysis, and memory inspection, GCFAs are adept at finding unknown malware, rootkits, and data that the intruders thought they had eliminated from the system.

This certification will ensure you have a firm understanding of advanced incident response and computer forensics tools and techniques to investigate data breach intrusions, tech-savvy rogue employees, advanced persistent threats, and complex digital forensic cases.

GCFA certification tests knowledge geared not only toward law enforcement personnel, but also toward corporate and organizational incident response and investigation teams, which have different legal or statutory requirements compared to a standard law enforcement forensic investigation.

To learn more about the GCFA and the SANS Institute, visit their website at http://computer-forensics.sans.org/certification/gcfa.

Digital forensics is the “application of computer science and investigative procedures for a legal purpose involving the analysis of digital evidence.”

Less formally, digital forensics is the use of specialized tools and techniques to investigate various forms of computer-oriented crime including fraud, illicit use such as child pornography, and many forms of computer intrusions.

Digital forensics as a field can be divided into two subfields: network forensics and host-based forensics. Network forensics focuses on the use of captured network traffic and session information to investigate computer crime.

Host-based forensics focuses on the collection and analysis of digital evidence collected from individual computer systems to investigate computer crime.

Digital forensics is a vast topic; a comprehensive discussion is beyond the scope of this chapter. Interested readers are referred to Jones [25] for more detail.

In the context of intrusion detection, digital forensic techniques can be used to analyze a suspected compromised system in a methodical manner.

Forensic investigations are most commonly used when the nature of the intrusion is unclear, such as those perpetrated via a zero-day exploit, but in which the root cause must be fully understood either to ensure the exploited vulnerability is properly remediated or to support legal proceedings.

Owing to the increasing use of sophisticated attack tools and stealthy and customized malware designed to evade detection, forensic investigations are becoming increasingly common, and sometimes only a detailed and methodical investigation will uncover the nature of an intrusion.

The specifics of the intrusion may also require a forensic investigation such as those involving the theft of PII in regions covered by one or more data breach disclosure laws.

Periodic Testing

There is an old Russian proverb famously used by Ronald Reagan during his presidency: “Trust, but verify.” An organization can offer all of the training in the world, but at some point, it will want to make sure that the training is actually working. There are many ways to put a company’s information governance policies to the test.

In the corporate world, computer penetration testing—often known as pen testing—is a common practice. After firewalls and other software fixes are put into place to keep out the criminals, organizations can hire companies that will act as the bad guys and attempt to hack into the computer system.

If they are able to conduct a successful computer intrusion, the organization will have the information they need to put patches into place.

Conversely, if the pen testers are unsuccessful, the organization will have confidence in their established protections, at least until smarter criminals come along who possess a higher level of skill.

Pen testing should be conducted periodically to ensure the best possible protections are put into place.

Pen testing is a means to test the physical equipment, but as discussed throughout this book, employees also pose various levels of threats to their employer’s sensitive information.

Some of the ways to test employees’ knowledge and training include sending them e-mails that employ social engineering techniques.

Whether phishing for information directly or seeking access to the computer system by obtaining passwords, social engineering tests can identify gaps in training and places where resources need to be directed to fill these gaps.

Beyond computers, employee testing can also be conducted in the form of ruse telephone calls seeking inside information from an employee by pretending to be a higher-up in the corporation.

If the employee follows the appropriate verification protocols, perhaps they can be rewarded, providing an incentive for all employees to be just as cautious.

If the employee fails the test, education and retraining can be offered to correct problems.

THIS LINK EXPLAINS MORE ABOUT INTRUSION

https://en.wikipedia.org/wiki/Intrusion_detection_system

https://en.wikipedia.org/wiki/Computer_security

