Posted on Leave a comment

Wide-scale Petya variant ransomware attack noted, (Tue, Jun 27th)

Sent from a reader earlier today:

  • Hearing some rumors that the company Merck is having a major virus outbreak with something new and their Europe networks are affected more than their US offices. Have you heard anything on this?

A quick check reveals that, apparently, another global ransomware attack is making the rounds today.

Initial reports indicate this is much like last months WannaCry attack. According to the Verge article, todays ransomware appears to be a new Petya variant called Petyawrap. At this point, we see plenty of speculation on how the ransomware is spreading (everything from email to an EternalBlue-style SMB exploit), but nothing has been confirmed yet for the initial infection vector.

Alleged samples of this ransomware include the following SHA256 hashes:

AlienVault Open Threat Exchange (OTX) is currently tracking this threat at:

Well provide more information as it becomes available.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Posted on Leave a comment

A Tale of Two Phishies, (Tue, Jun 27th)


Has anyone read A Tale of Two Cities, the 1859 novel by Charles Dickens? Or maybe seen one of the movie adaptations of it? Its set during the French Revolution, including the Reign of Terror, where revolutionary leaders used violence as an instrument of the government.

In the previous sentence, substitute violence with email. Then substitute government with criminals. Now what do you have? Email being used as an instrument of the criminals!

I know, I know… No real ties to Dickens novel here. border-width:2px” />
Shown above: Thats all I got–a somewhat clever title for this diary.

This diary briefly investigates two phishing emails. Its a Tale of Two Phishies I ran across on Monday 2017-06-26.

First example: an unsophisticated phish

The first example went to my blogs admin email address. It came from the mail server of an educational institution in Paraguay, possibly used as a relay from an IP address in South Africa. For email headers, you can only rely on the Received: header right before the message hits your mail server. Anything before that can be spoofed.

Its a pretty poor attempt, because this phishing message is very generic. Im educated enough to realize this didnt come from my email provider. And the login page was obviously fake. Unfortunately, some people might actually be fooled by this.

The compromised website hosting a fake login page was quickly taken off line. You wont be able to replicate the traffic by the time you read this. It border-width:2px” />
Shown above: border-width:2px” />
Shown above: border-width:2px” />
Shown above: The fake login page from link in the phishing email.

Second example: a slightly more complex phish

Every time I see a phishing message like this second example, I hope theres malware involved. border-width:2px” />
Shown above: The second phishing email.

Examining the PDF attachment, I quickly realized the criminals had made a mistake. They forgot to put .com at the end of the domain name in the URL from the PDF file. lillyforklifts should be Id checked the URL early Monday morning with .com at the end of the domain name, and it worked. border-width:2px” />
Shown above: PDF attachment from the second phishing email.

An elephant in the room

These types of phishes are what I call an elephant in the room. Thats an English-language metaphor. Elephant in the room represents an obvious issue that no one discusses or challenges. These types of phishing emails are very much an elephant in the room for a lot of security professionals. Why? Because we see far more serious issues during day-to-day operations in our networks. Many people (including me) feel we have better things to worry about.

But these types of phishing emails are constantly sent. They represent an on-going threat, however small they might be in comparison to other issues.

Messages with fake login pages for Netflix, Apple, email accounts, banks, and other organizations occur on a daily basis. For example, on, the stats page indicates an average of 1,000 to 1,500 unique URLs were submitted on a daily basis during the past month. Stats for specific months show 58,556 unique URLs submitted in May 2017 alone.

Fortunately, various individuals on Twitter occasionally tweet about the fake login pages they find. Of course, many people also notify sites like PhishTank,, and many other resources to fight this never-ending battle.

So today, its open discussion on these phishing emails. Do you know anyone thats been fooled by these messages? Are there any good resources covering these phishing emails I forgot to mention? If so, please share your stories or information in the comments section below.

Brad Duncan
brad [at]

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Posted on Leave a comment

Investigation of BitTorrent Sync (v.2.0) as a P2P Cloud (Part 1), (Mon, Jun 26th)

[This is the first part of a multi-part a guest diary written byDr. Ali Dehghantanha]

One of the nightmares of any forensics investigator is to come across a new or undocumented platform or application during an investigation with tight deadlines! The investigator has only limited research time to detect evidences hoping not to miss any essential remnants! Fortunately there is a field of research called Residual Data Forensic in which researchers detect and document remnants (evidence) of forensic value of user activities on different platforms. Residual forensic researchers are usually listing minimum evidences that can be extracted by a forensics practitioner.

In one of my recent engagements, I had to investigate BitTorrent Sync version 2.0 on a range of different devices. Back then I used papers authored by Scanlon, M., Farina et al., (Refer to References 1,2,3,4) on the investigation of BitTorrent Sync (version 1.1.82). However, as a redesigned folder sharing workflow has been introduced in the newer version of BitTorrent Sync (from version 1.4 onwards), there is a need to develop an up-to-date understanding of the artefacts from the newer BitTorrent Sync applications.

In a series of diaries I am going to discuss about residual artefacts of BitTorrent Sync version 2.0 on Windows 8.1, Mac OS X Mavericks 10.9.5, Ubuntu 14.04.1 LTS, iOS 7.1.2, iPhone 4 running iOS 7.1.2 and a HTC One X running Android KitKat 4.4.4 (For a more involved reading which include experiment setup and full details of our investigation please refer to our paper titled Forensic Investigation of P2P Cloud Storage: BitTorrent Sync as a Case Study (Reference 5)). Please feel free to comment about any other evidences that you came across in your investigations and/or suggest other investigation approach.

This diary post explains artefacts of directory listings and files of forensic interest of BitTorrent Sync version 2.0 on Windows 8.1, Mac OS X Mavericks 10.9.5, and Ubuntu 14.04.1 LTS.

The downloaded folders were saved at %Users%[User Profile]BitTorrent Sync, /home/[User profile]/BitTorrent Sync, and /Users/[User Profile]/BitTorrent Sync on the Windows 8.1, Ubuntu OS, and Mac OS clients by default, respectively. Within the shared folders (both locally added and downloaded) there is a hidden .sync subfolder. The file of particular interest stored within the subfolder is the ID file which holds the folder-specific share ID in hex format. The share ID would be especially useful when seeking to identify peers sharing the same folder during network analysis.

When a synced file was deleted, copies of the deleted file can be recovered from the /.sync/Archive folder of the corresponding peer devices. It is important to note that the deleted files will only be kept in the archive folder for 30 days by default. Copies of the deleted files alongside the pertinent file deletion information (e.g., the original paths, file sizes, and deletion times) can be recovered from the %$Recycle.Bin%SID folder on Windows 8.1, but the files are renamed to a set of random characters prefixed with $R and $I. On Ubuntu machine, copies of deleted files can be recovered from /home/[User Profile]/.local/share/Trash/files folder. Original file path and deletion time can be recovered from .TRASHINFO files located in /home/[User Profile]/.local/share/Trash/info/. In contrast to Windows and Ubuntu OS, examination of the Mac OSX trash folder (located at /Users/[User profile]/.Trash) only recovered copies of the deleted files. However, it is noteworthy that the findings are only applicable to the system that initiated the file deletion and as long as the recycle bin or trash folder is not emptied. A practitioner could potentially recover the BitTorrent Sync usage information from various metadata files resided in the application folder located at %AppData%RoamingBitTorrent Sync on Windows 8.1 and /Users/[User Profile/Library/Application Support/BitTorrent Sync on Mac OSX.

The application folder maintains a similar directory structure across multiple operating systems, and the /%BitTorrent Sync%/.SyncUserRandom number subfolder is an identity-specific application folder that will be synchronised across multiple devices sharing the same identity. The first file of particular interest within the application folder is settings.dat which maintains a list of metadata associated with the device under investigation such as the installation path (which could be distinguished by the exe_path entry), installation time in Unix epoch format (install_time), non-encoded peer ID (peer_id), log size (log_size), registered URLs for peer search (search_list, tracker_last etc.), and other information of relevance. The second file of forensic interest within the application folder is the sync.dat which contains a wealth of information relating to the shared folders downloaded to the device under investigation. In particular, the device name could be discerned from the device entry. The identity entry records the identity name (name) of the device under investigation as well as the private (private_keys) and public keys (public_keys) used to establish connections with other devices. A similar finding was observed for the peer identities in identities entry. A replication of the identity and identities entries can be located in the local-identity-specific /%BitTorrent Sync%/.SyncUserRandom number/identity.dat file and peer-identity-specific /%BitTorrent Sync%/.SyncUserRandom number/identities/[Certificate fingerprint] file (with the exception of the private key) respectively. 
The access-requests entry holds a list of metadata pertaining to the identities which sent folder access requests to the device under investigation such as the last used IP addresses in network byte order (addr), identity names (name), public keys public_keys) of the requesting identities, as well as base32-encoded temporary keys (invite), requested folder IDs, requested times (req_time), requested permissions (requested_permissions where 2 indicates read only, 3 indicates read and write, and 4 indicates owner), and granted permission (granted_permissions).

Located within the folders entry of the sync.dat file was metadata relating to the synced folders. It should be noted that this entry will never be empty as it will always contain at least an entry for the identity-specific /%BitTorrent Sync%/SyncUserRandom number application folder. Amongst the information of forensic interest recoverable from the folders entry included the folder IDs (folder_id), storage paths (path), the addition and last modified dates in Unix epoch format, the peer discovery method(s) used to share the synced folders, the access and root certificates keys, whether the folders have been moved to trash, and other information of relevance. Correlating the folder IDs recovered from folders entry with the folder IDs located in /%BitTorrent Sync%/SyncUserRandom number/devices/[Base32-encoded Peer ID]folders may determine the shared folders associated with a peer device. Analysis of the access control list (acl) subentry (of the folders entry) can be used to identify the permissions of identities associated with each shared folder, such as the identity names (name), public keys (public_keys), signature issuers, the times when the identities were linked to a specific shared folder, as well as other information of relevance. Similar details can be located in the folder-specific /%BitTorrent Sync%/.SyncUserRandom number/folders/[Folder ID]/info.dat file. The peers subentry (of the folders entry), if available, would provide a practitioner information about the peers associated with the shared folders added by the device under investigation such as the last completed sync time (last_sync_completed), last used IP address (last_addr) in network byte order, device name (name), last seen time (last_seen), last data sent time (last_data_sent), and other relevant information.

Another file of interest which can potentially allow a practitioner to recover the sync metadata is the /%BitTorrent Sync%/[share-ID].db SQLite3 database. This share-ID-specific database describes the content of a shared folder (including the /%BitTorrent Sync%/SyncUserRandom number application folder) such as the shared filenames or folders (stored in the path table field of the files table), hashes, and transfer piece registers for the shared files or folders. Once the shared filenames or folders have been identified, a practitioner may map the details to the /%BitTorrent Sync%/history.dat file (which maintains a list of file syncing events appeared in the History of the BitTorrent Sync client application) to obtain the sync times in Unix epoch format as well as the associated device names width:300px” />

Figure 1: History.dat file

/%BitTorrent Sync%/ file holds the last used process identifier (PID) which can be used to correlate data with physical memory remnants (e.g., mapping a string of relevance to the data resided in the memory space of investigating PID using the yarascan function of Volatility). It is important to note that all the metadata files aforementioned are Bencoded (with the exception of the file) and the old metadata files would have. width:300px” />

Figure 2:

Disconnecting a shared folder, it was observed that no changes were made to the peer devices, even when the option delete files from this device was selected to permanently delete the sync files/folders from the local device. Unlinking an identity from investigated devices, it was observed that the identity-specific /%BitTorrent Sync%/.SyncUserRandom number application folder will be deleted from the local device. However, only the identity-specific metadata will be removed from the identity and identities entries of the local and peer devices settings.dat files.

Undertaking uninstallation of the Windows client application would remove synced folders from folders containing the .sync subfolder in the directory listing. Manual uninstallation of the Linux and Mac client applications left no trace of the client application usage/installation in the directory listing, but (obviously) deleted files/folders were recoverable from the non-emptied /Users/[User profile]/.Trash folder of the Mac OSX VM investigated.

Undertaking data carving of unallocated spaces (of the file synchronisation VMs) could recover copies of synced files as well as the log and metadata files of forensic interest (e.g., sync.log, sync.dat, history.dat, and settings.dat used by the client applications). A search for the terms bittorrent, bencode keys specific to the metadata files of relevance, as well as the pertinent log entries was able to locate copies of the recovered files. The remnants remained even after uninstallation of client applications, which suggested that unallocated space is an important source for recovering deleted BitTorrent Sync or synced files.

Our next post would describe investigation of BitTorrent log files.


1)Scanlon, M., Farina, J. and Kechadi, M. T. (2014a) BitTorrent Sync: Network Investigation Methodology, In IEEE, pp. 2129, [online] Available from: (Accessed 11 March 2015).

2)Scanlon, M., Farina, J., Khac, N. A. L. and Kechadi, T. (2014b) Leveraging Decentralization to Extend the Digital Evidence Acquisition Window: Case Study on BitTorrent Sync, arXiv:1409.8486 [cs], [online] Available from: (Accessed 18 March 2015).

3) Scanlon, M., Farina, J. and Kechadi, M.-T. (2015) Network investigation methodology for BitTorrent Sync: A Peer-to-Peer based file synchronisation service, Computers Security, [online] Available from: (Accessed 9 July 2015).

4) Farina, J., Scanlon, M. and Kechadi, M. T. (2014) BitTorrent Sync: First Impressions and Digital Forensic Implications, Digital Investigation, Proceedings of the First Annual DFRWS Europe, 11, Supplement 1, pp. S77S86.

5) Teing Yee Yang, Ali Dehghantanha, Kim-Kwang Raymond Choo, Forensic Investigation of P2P Cloud Storage: BitTorrent Sync as a Case Study, (Elsevier) International Journal of Computers Electrical Engineering, 2016.

Find out more about Dr. Ali Dehghantanha at

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Posted on Leave a comment

Traveling with a Laptop / Surviving a Laptop Ban: How to Let Go of "Precious", (Mon, May 29th)

For a few months now, passengers on flights from certain countries are no longer allowed to carry laptops and other larger electronic devices into the cabin. Many news media reported over the last weeks that this policy may be expanded to flight from Europe, or to all flights entering the US. But even if you get to keep your laptop with you during your flight, it is difficult to keep it at your site when you travel. So regardless if this ban materializes or not (right now it looks like it will not happen), this is your regular reminder on how to keep your electronics secure while traveling.

Checking a laptop is considered inadvisable for a number of reasons:

– Your laptop is out of your controland could be manipulated. It is pretty much impossible to secure a laptop if an adversary has control of it for a substantial amount of time. These attacks are called sometimes called evil maid attacks in reference to having the laptop manipulated while it is stored in a hotel room.

– Laptops often are stolen from checked luggage. Countless cases have been reported of airport workers, and in some cases, TSA employees, stealing valuables like laptops from checked luggage.

– Laptops contain lithium batteries which are usually not allowed to be checked as there have been instances of them exploding (and this fact may very likely block the laptop ban)

You are typically not allowed to lock your checked luggage. And even if you lock it, most luggage locks are easily defeated. The main purpose of a lock should be to identify tampering, not to prevent tampering or theft.

Here are a couple of things that you should consider when traveling with your laptop, regardless of where you keep it during your flight:

– Full disk encryption with pre-boot authentication. This is a must of any portable device, no matter where you are flying. You will never be able to fully control your device. Larger devices like laptops are often left unattended in a hotel room, and hotel safes provide minimal security.

– Power your device down. Do not just put it to sleep. For checked luggage, this may even prevent other accidents like overheating if the laptop happens to wake up. But powering the laptop down will also make sure encryption keys can not be recovered from memory.

– Some researchers suggest covering the screws on your laptop in glitter nail polish. Take a picture before departure and use it to detect tampering.

– Take a blank machine, and restore it after arrival from a network backup. This may not be practical, in particular for international travel. But you could do the same with a disk backup, and so far, USB disks are still allowed as carry-on and they are easier to keep with you. Encrypt the backups.

– Take a blank machine and use a remote desktop over the network. Again, this may not work in all locations due to slow network speeds and high costs. But this is probably the most secure solution.

– If you are lucky enough to own a laptop with removable hard drive, then remove it before checking your luggage.

– Before departure, setup a VPN endpoint that allows connections on various ports and via HTTP proxies (e.g. OpenVPN has a mode allowing this). You never know what restrictions you run into. Test the VPN before you leave!

Have a plan for what happens if your laptop is lost or stolen. How will you be able to function? Even if you do not have a complete backup of your laptop with you, a USB stick with important documents that you will need during your trip is helpful, as well as a cloud-based backup. You may want to add VPN configuration details and certificates to the USB stick so you can connect to one if needed. Be ready to use a loaner system for a while with unknown history and configuration to give a presentation, or even to use for webmail access. This is a very dangerous solution, and you should reset any passwords that you used on the loaner system as soon as possible. But sometimes you have to keep going under less than ideal circumstances. Of course, right now, you can still bring your phone onboard, which should be sufficient for e-mail in most cases.

In general, this advice should be obeyed anyway when traveling. It is very hard to stay not leave your laptop unsupervised over a long trip. If you dont trust hotel safes (and you should not trust them), then it may make sense to bring your own lockable container like a Pelikan case with solid locks (Pelikan also makes a backpack that works reasonably well but is a bit bulky and heavy). Dont forget a cable to attach the case to something. Just dont skimp on the locks and again: The goal is to detect tampering/theft, not to prevent it. Any case that you can carry on an airplane can be defeated quickly with a hacksaw or a crowbar, and usually, it takes much less.

Also, see this Ouch! Newsletter about staying secure while on the road:

Johannes B. Ullrich, Ph.D., Dean of Research, SANS Technology Institute

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Posted on Leave a comment

Fake DDoS Extortions Continue. Please Forward Us Any Threats You Have Received., (Fri, Jun 23rd)

We do continue to receive reports about DDoS extortion e-mail. These e-mails are essentially spammed to the owners of domains based on whois records. They claim to originate from well-known hacker groups like Anonymous who have been known to launch DDoS attacks in the past. These e-mails essentially use the notoriety of the groups name to make the threat sound more plausible. But there is no evidence that these threats originate from these groups, and so far we have not seen a single case of a DDoS being launched after a victim received these e-mails. So no reason to pay 🙂

Here is an example of an e-mail (I anonymized some of the details like the bitcoin address and the domain name)

We are Anonymous hackers group.
Your site [domain name] will be DDoS-ed starting in 24 hours if you dont pay only 0.05 Bitcoins @ [bit coin address]
Users will not be able to access sites host with you at all.
If you dont pay in next 24 hours, attack will start, your service going down permanently. Price to stop will increase to 1 BTC and will go up 1 BTC for every day of attack.
If you report this to media and try to get some free publicity by using our name, instead of paying, attack will start permanently and will last for a long time.
This is not a joke.
Our attacks are extremely powerful – over 1 Tbps per second. No cheap protection will help.
Prevent it all with just 0.05 BTC @ [bitcoin address]
Do not reply, we will not read. Pay and we will know its you. AND YOU WILL NEVER AGAIN HEAR FROM US!
Bitcoin is anonymous, nobody will ever know you cooperated.

This particular e-mail was rather cheap. Other e-mails asked for up to 10 BTC.

There is absolutelyno reason to pay any of these ransoms. But if you receive an e-mail like this, there are a couple of things you can do:

  • Verify your DDoS plan: Do you have an agreement with an anti-DDoS provider? A contact at your ISP? Try to make sure everything is set up and working right.
  • We have seen these threats being issued against domains that are not in use. It may be best to remove DNS for the domain if this is the case, so your network will not be affected.
  • Attackers often run short tests before launching a DDoS attack. Can you see any evidence of that? A brief, unexplained traffic spike? If so, then take a closer look, and it may make the threat more serious if you can detect an actual test. The purpose of the test is often to assess the firepower needed to DDoS your network

And please forward any e-mails like this to us. It would be nice to get a few more samples to look for any patterns. Like I said above, this isnt new, but people appear to still pay up to these fake threats.

Johannes B. Ullrich, Ph.D., Dean of Research, SANS Technology Institute

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Posted on Leave a comment

Obfuscating without XOR, (Thu, Jun 22nd)

Malicious files are generated and spread over the wild Internet daily (read: hourly). The goal of the attackers is to use files that are:

  • not know by signature-based solutions
  • not easy to read for the human eye

Thats why many obfuscation techniques existto lure automated tools and security analysts. In most cases, its just a question of time to decode the obfuscated data. A classic technique is to use the XOR cypher[1]. This is definitively not a new technique(see a previous diary[2] from 2012) but it still heavily used. And many tools can automate the search for XORd string. Viper, the binary analysis and management framework, is a good example. It can scan for XOR padding:5px 10px”>
viper tmpnYaBJs xor -a
[] Searching for the following strings:
– This Program
– GetSystemDirectory
– CreateFile
– IsBadReadPtr
– IsBadWritePtrGetProcAddress
– LoadLibrary
– WinExec
– CreateFileShellExecute
– CloseHandle
– UrlDownloadToFile
– GetTempPath
– ReadFile
– WriteFile
– SetFilePointer
– GetProcAddr
– VirtualAlloc
– http
] Hold on, this might take a while…
[] Searching XOR
[!] Matched: http with key: 0x74
] Searching ROT
viper tmpnYaBJs padding:5px 10px”>
var bcacfdfaebbbfDeck = new ActiveXObject(dbdbfaeefccaee(+L+^%^LK%,LpL(KeL^%z%+%u%u

I took some time to check how the obfuscation was performed. How does it work?

The position of each character is searched in the $data variable and decreased by one. Then the character at this position is returned to build a string of hexcodes. Finally, the hex codes are converted into the final string. Example with the two first characters of the example above:

$data =SYOm7L-3^ojXtMA2Kbk_FN)GB.$1PJgR



  • + is located at pos 20, search the character at position 19 (20 – 1): 5
  • L is located at pos 5, search the character at position 4 (5 – 1): 7
  • 57 is the hex code for W padding:5px 10px”>
    // Convert a string from hex chars to string.
    // In: 575363726970742E7368656C6C
    // Out:
    var bufferout = i

    // Convert the obfuscate string by shifting by 1 char
    function deobfuscate(string,step){
    var data = SYOm7L-3^ojXtMA2Kbk_FN)GB.$1PJgR
    var bufferout = i
    if (p2 padding:5px 10px”>
    var s = deobfuscate(%zL(L(Lp^2KNKN^P^z^+Ke^P^+^(Ke^+^KKe^P^p^PKN%u%N%L%NKe%,%0%L padding:5px 10px”>

    And when you understand how to deobfuscate, it padding:5px 10px”>
    function obfuscate(string,step){
    var data = SYOm7L-3^ojXtMA2Kbk_FN)GB.$1PJgR
    var bufferout = i j
    if (p2
    if (p2==l2)
    padding:5px 10px”>
    var foo = obfuscate( padding:5px 10px”>

    Of course, the method analyzedhere is a one shot! The number of ways to obfuscate data is unlimited…


    Xavier Mertens (@xme)
    ISC Handler – Freelance Security Consultant
    PGP Key

    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

  • Posted on Leave a comment

    It has been a month and a bit how is your new patching program holding up?, (Wed, Jun 21st)

    Last months entertainment for many of us was of course the wannacray ms17-010 update. For some of you it was a relaxing time just like any other month. Unfortunately for the rest of us it was a rather busy period trying to patch systems that in some cases had not been patched in months or even years. Others discovered that whilst security teams have been saying you want to open what port to the internet? firewall rules were approved allowing port 445 and in other cases even 139. Another group of users discovered that the firewall that used to be enabled on their laptop was no longer enabled whilst connected to the internet. Anyway, that was last month. On the back of it we all made improvements to our vulnerability management processes. You did, right?

    Ok, maybe not yet, people are still hurting. However, when an event like this happens it is a good opportunity to revisit the process that has failed, identify why it went wrong for you and make improvements. Not the sexy part of security, but we cant all be threathunting 24/7.

    If you havent started yet or the new process isnt quite where it needs to be where do you start?
    Maybe start with how fast or slow should you patch? Various standards suggest that you must be able to patch critical and high risk issues within 48 hours. Not impossible if you approach it the right way, but you do need to have the right things in place to make this happen.
    You will need:

    • Asset information – you need to know what you have, how critical it is and of course what is installed on it. Look at each system you have, evaluate the confidentiality, integrity and availability requirements of the system and categorise the systems into critical and less critical systems to the organisation.
    • Vulnerability/Patch information – you need information from vendors, open source and commercial alike. Subscribe to the various lists, get a local RSS feed, etc. Vendors are generally quite keen to let you known once they have a patch.
    • Assessment method The information received needs to be evaluated. Review the issue. Are the systems you have vulnerable? Are those systems that are vulnerable flagged as important to the business? If the answer is yes to both questions (you may have more), then they go on the must patch now list. The assessment method should contain a step to document your decision. This will keep auditors happy, but also allows you to better manage risk.
    • Testing Regime Speed in patching processes comes from the ability to test the required functionality quickly and the reliability of those tests. Having standard tests or even better automated tests can speed up the validation process allowing patching to continue.

    Once you have the four core ingredients you are now in a position to know what vulnerabilities are present and hopefully patchable. You know the systems that are most affected by them and have the highest level of risk to the organisation.

    The actual mechanics of patching is individual to each organisation. Most of us however will be using something like WSUS, SCCM or Third-party patching products and/or their linux equivalents like satellite, puppet, chef, etc. In the tool used, define the various categories of systems you have, reflecting their criticality. Ideally have a test group for each, Dev or UAT environments if you have them can be great for this. I also often create a The Rest group. This category contains servers that have a low criticality and can be rebooted without much notice. For desktops, I often create a test group, a pilot group and a group for all remaining desktops. The pilot group has representative of most if not all types of desktops/notebooks used in the organisation.

    When patches are released, they are evaluated, and if they are to be pushed, they are released to the test groups as soon as possible. Basic functionality and security testing is completed to make sure the patches are not causing issues. Depending on the organisation, we often push DEV environments first, then UAT after a cycle of testing. Within a few hours of release you should have some level of confidence that the patches are not going to cause issues. Your timezone may even help you here. In AU, for example, patches are often released during the middle of our night, which means other countries may already have encountered issues and reported them (keep an eye on the ISC site) before we start patching.
    The next step is to release the patch to "The Rest" group and, for desktops, to the pilot group. Again, testing is conducted to gain confidence that the patch is not causing issues. Remember, these are low-criticality servers and desktops. Once happy, start scheduling the production releases. Post reboot, run the various tests to restore confidence in the system, and you are done.
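    The ring-style release flow just described can be sketched roughly as follows. The group names and soak times are illustrative assumptions, not prescribed values; the useful property is that the rollout order and waiting periods live in one place.

```python
# Illustrative rollout-ring schedule: which groups get a patch, in which
# order, and how many hours of soak/testing time before the next ring.
RINGS = [
    ("test",       ["dev", "uat"],               4),
    ("low-risk",   ["the-rest", "pilot"],        24),
    ("production", ["prod-servers", "desktops"], 0),
]

def rollout_plan(start_hour=0):
    """Yield (ring_name, groups, release_hour) tuples in release order."""
    hour = start_hour
    for name, groups, soak in RINGS:
        yield name, groups, hour
        hour += soak

plan = list(rollout_plan())
```

    Shortening the soak values is how the cycle compresses toward a 48-hour target without changing the process itself.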

    The biggest challenge in the process is getting a maintenance window to reboot. The best defence against having your window denied is to schedule windows in advance and get the various business areas to agree to them. Patch releases are pretty regular, so they can be scheduled ahead of time. I like working one or even two years in advance.

    The second challenge is the testing of systems post patching. This will take the most prep work. Some organisations will need people to test systems; some may be able to automate tests. If you need people, organise test teams and schedule their availability ahead of time to help streamline your process. Anything that can be done to gain confidence in the patched system faster will help meet the 48-hour deadline.

    If going fast is too daunting, make the improvements in baby steps. If you generally patch every three months, implement your own ideas, or some of the above, and see if you can reduce it to two months. Once that is achieved, try to reduce it further.

    If you have your own thoughts on how people can improve their processes, or you have failed (we can all learn from failures), then please share. The next time there is something similar to WannaCry, we all want to be able to say we sorted that ages ago.

    Mark H – Shearwater

    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.


    Windows Error Reporting: DFIR Benefits and Privacy Concerns, (Tue, Jun 20th)


    1. Introduction

    Recently, I was confronted with a scenario where a very suspicious Windows pop-up message was shown to a specific user on a corporate network. It was a standard Yes/No Windows dialog box and, although I cannot reveal the message content, I can assure you that it was in the context of what the user was doing on his computer at that moment.

    As we were dealing with a major incident on the same network, our first assumption was that someone had compromised that machine and was controlling it remotely through a reverse connection – the type of situation that calls for a rapid response.

    However, after a few hours hunting for any piece of malware on that machine, including operating system events, network connections, user Internet history, e-mail attachments, external devices and so on, nothing interesting was found. In fact, the evidence came from a source I had never imagined could help me in an incident response. It came from Windows Error Reporting (WER), as described in this diary.

    2. The subtle clue

    As no malware evidence was found, we decided to go back to the drawing board, and after looking carefully at the strange message, I noticed that whatever application had been used by the attacker to present the message was hanging – the classic (Not Responding) text was shown in its title bar, as in the sample in Figure 1.

    Figure 1 Not Responding application sample

    By default, when an application hangs or crashes on a Windows system, the Windows Error Reporting (WER) mechanism [1] automatically gathers detailed debug information, including the application name, loaded modules and, more importantly, a heap dump, which contains the data loaded in the application at the time the memory was collected. All this data is reported to Microsoft which, in turn, may provide users with solutions for known problems.
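    As an aside, the Report.wer files WER leaves behind are simple key=value text (UTF-16 encoded in my experience), so they can be parsed with a few lines. This is a hedged sketch; the field names in the synthetic sample (Version, EventType, AppName) are illustrative, not a guaranteed schema:

```python
import tempfile

# Sketch of a Report.wer parser. WER report files are key=value text,
# typically UTF-16 encoded; the field names in the sample below are
# illustrative only.
def parse_wer(path):
    fields = {}
    with open(path, "r", encoding="utf-16", errors="replace") as f:
        for line in f:
            line = line.strip()
            if "=" in line:
                key, _, value = line.partition("=")
                fields[key] = value
    return fields

# Synthetic report standing in for a real file from a WER report folder.
sample = "Version=1\nEventType=AppHangB1\nAppName=notepad++.exe\n"
with tempfile.NamedTemporaryFile("w", encoding="utf-16",
                                 suffix=".wer", delete=False) as f:
    f.write(sample)
    report_path = f.name

report = parse_wer(report_path)
```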

    As the application used to send the strange message had hanged, chances were that we could find generated WER artifacts to analyze and track the supposed intrusion. Thus, our next step was looking for them.

    3. Collecting WER information

    To demonstrate how we found and analyzed WER files related to that hanged application without exposing real incident information, we've created a similar scenario and used it for this analysis.

    4. Crashing an application

    Using a default Windows 10 installation in our lab, the first step was forcing an application to crash. For this purpose, we used the Notepad++ text editor as the application to be crashed and the Process Explorer tool [2] as the means to cause the crash.

    For later analysis purposes, we typed some simple text into the editor, as seen in Figure 2, and, through Process Explorer, started killing random application threads, such as those running ntdll.dll code, as seen in Figure 3.

    Figure 2

    Figure 3 Killing application threads

    It didn't take long for the application to hang, as seen in Figure 4.

    Figure 4

    Figure 5 Application event log evidence

    Note that the event ID for a crashed application has the value 1000, while for hanged applications the value is 1002.
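    If you export the Application log for offline triage, filtering on those two IDs is straightforward. A minimal sketch, assuming a hypothetical exported record format rather than any particular event-log API:

```python
# Filter exported Application-log records for crash (1000) and hang (1002)
# events. The record layout here is a hypothetical export format, not a
# real Windows event-log API.
CRASH_ID, HANG_ID = 1000, 1002

def crash_hang_events(records):
    return [r for r in records if r.get("EventID") in (CRASH_ID, HANG_ID)]

exported = [
    {"EventID": 7036, "Source": "Service Control Manager"},
    {"EventID": 1002, "Source": "Application Hang", "App": "notepad++.exe"},
    {"EventID": 1000, "Source": "Application Error", "App": "someapp.exe"},
]
hits = crash_hang_events(exported)
```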

    Other evidence includes the WER files themselves which, depending on the Windows version, are generated in different paths and can be found through different Control Panel menu options. On Windows 7, for example, WER settings and report access can be found through the Action Center, and on Windows 8 through Problem Reports and Solutions.

    On Windows 10, used in our demonstration scenario, the WER menu can be opened through Control Panel – System and Security – Security and Maintenance, as seen in Figure 6.

    Figure 6 Looking for the specific problem report


    Figure 7 WER problem details

    Another way to find WER files is to go directly to the path where they are created on disk. On Windows 10, WER report files can be reached through the path %SystemDrive%\ProgramData\Microsoft\Windows\WER, as seen in Figures 8 and 9.

    Figure 8

    Figure 9 WER file list

    5. Analyzing the evidence

    Now, drawing a parallel to the real incident case: when we searched for event log evidence, we found that an application had hanged on that machine moments before the message screenshot was taken. Better than that, we also found the WER files associated with that application hang!

    You may be thinking right now: how could I find WER files on the machine if they are deleted from disk after being sent to Microsoft? The point is: they weren't sent.

    The WER report wasn't uploaded to Microsoft: in our reproduction, the SSL connection used for the upload was intercepted (a MITM scenario), causing the report submission to fail, as seen in Figure 10.

    Figure 10 Problem uploading WER during the MITM attack

    Heading back to the real scenario: with WER files in our hands, we could discover the name of the application that possibly generated that suspicious pop-up message and, by inspecting the heap dump file, we could confirm it. It turns out that we found the exact pop-up message content in the memory dump file using a simple strings command, although there is a more orthodox way to inspect and debug those files using Windbg [4].
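    For readers who want to reproduce the strings step on a dump file, the basic extraction can be sketched in a few lines (ASCII-only here; the real strings tool can also extract UTF-16 strings, which matter for Windows heap dumps):

```python
import re

# Minimal strings(1)-style extraction: pull printable-ASCII runs of at
# least min_len characters out of a binary blob such as a heap dump.
def strings(data, min_len=4):
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# A fake "heap dump" with one readable message buried in binary noise.
dump = b"\x00\x01MZ\x90secret pop-up text\x00\xff\x10ok\x00"
found = strings(dump)
```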

    Employing the same strings approach in our demonstration scenario, we could recover the text typed into the editor, as seen in Figure 11.

    Figure 11 Evidence found

    6. Final words

    As we could see, in addition to helping Windows users deal with application crashes and hangs, this case demonstrated that WER can be extremely useful for post-mortem analysis. Depending on the scenario, it's like having an application memory dump to analyze as part of your DFIR activities without having collected it during the incident.

    On the other hand, it raises some concerns regarding data leakage through the memory dump files. Even considering that you have consented to send that information to Microsoft (whether or not you remember having done so [5]), there is the possibility of that content being accessed by third parties – for example, an intruder who has escalated privileges on the targeted machine, or simply the new employee now using your machine, when you thought that removing your user home directory would be enough.

    Things may get worse if we consider that the crashed or hanged application is, for example, a password manager. We ran experiments on a group of them and privately reported those that allowed us to recover clear-text passwords from WER memory dumps. The Enpass password manager has already published a security bulletin and a new version fixing the vulnerability [6], to which CVE-2017-9733 [7] has been assigned.

    For Windows application developers in general, to prevent sensitive information leaking through crash dumps, we recommend either completely disabling WER triggering by using the AddERExcludedApplication or WerAddExcludedApplication functions [8], or excluding the memory region that may contain sensitive information using the function WerRegisterExcludedMemoryBlock [9] (available only on Windows 10 and later).

    A more comprehensive solution could be provided by Windows itself, which could protect report files by encrypting them – at least the memory dumps. Interestingly, there is a patent from IBM about exactly this: protecting application core dump files [10]. Today, encryption is employed only while WER report files are being sent to Microsoft over SSL connections.

    Regarding our case: in the end, we fortunately realized that there was no violation or intrusion on that machine. It was, indeed, a misuse of a legitimate tool by an internal employee, and it taught us a bit more about the importance of WER files to digital forensics and user privacy.

    7. References











    Renato Marinho

    Morphus Labs | | @renato_marinho

    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.


    As Your Admin Walks Out the Door .., (Mon, Jun 19th)

    One of our readers (thanks Gebhard) mailed us a link to an article on what the press is apparently now calling a "Revenge Wipe" – a system administrator who has left the organization and, as a last hurrah, deletes or locks out various system or infrastructure components.

    In this case, the organization was a hosting company in the Netherlands (Verelox). In the case of cloud providers, a disgruntled admin may have access to delete entire networks, hosts, and associated infrastructure. Where it's a smaller CSP, the administrator may have access to delete customer servers and infrastructure as well. In Verelox's situation, that seems to have been the case (from their press release, at least).

    The classic example of this is the City of San Francisco in 2008, where their main administrator (Terry Childs) refused to give up the credentials to their FiberWAN network infrastructure, even after being detained by law enforcement (he eventually did give the credentials directly to the Mayor). I've listed several other examples in the references below – note that this was not a new thing even in 2008; it has been a serious consideration for as long as we've had computers.

    So, how should an organization protect themselves from a situation like this?

    Use Separation of Duties:

    Know who has access to what. Have multiple people with access to each system. Having any system with only a single administrator can turn into a real problem in the future.

    Use Authorization:

    It can be difficult, but wherever possible use admin accounts with only the rights required. It's very easy to build an "every admin has all rights" infrastructure. It's likely more difficult to build a "why does the VMware admin need the rights to delete an entire LUN on the SAN?" configuration, but it's important to think along those lines wherever you can.

    Use a back-end directory for authentication to network infrastructure:

    What this often means is that folks implement NPS (RADIUS) services in Active Directory. This allows you to audit access and changes during regular production, and also allows you to deactivate network administrator accounts in one place.

    Where you can, use Two Factor Authentication

    Use 2FA wherever possible; this makes password attacks much less of a threat. 2FA is a definite easy win for VPN and other remote access, and also for administration of almost all cloud services used by your organization.

    Just as a side note – I am still seeing that many smaller CSPs have not gone forward with 2FA. If you are looking at any new cloud services, adding two-factor authentication as a must-have requirement is a good way to go.

    Deal with Stale Accounts:

    Keep track of accounts that are not in use. I posted a PowerShell script for this (targeting AD) in a previous story.
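    The original PowerShell script isn't reproduced here, but the idea can be sketched against any exported account list. The (name, last_logon) input format below is a hypothetical export, not the AD-specific version:

```python
from datetime import datetime, timedelta

# Flag accounts whose last logon is older than max_age_days. The input
# is a hypothetical (name, last_logon) export, not an AD query.
def stale_accounts(accounts, now, max_age_days=90):
    cutoff = now - timedelta(days=max_age_days)
    return sorted(name for name, last_logon in accounts
                  if last_logon < cutoff)

now = datetime(2017, 6, 19)
accounts = [
    ("jsmith",  datetime(2017, 6, 1)),   # recently active
    ("oldsvc",  datetime(2016, 11, 2)),  # stale
    ("retired", datetime(2015, 3, 9)),   # very stale
]
stale = stale_accounts(accounts, now)
```

    Run something like this on a schedule, and stale accounts become a report you act on rather than a surprise on termination day.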

    Deal with Service Accounts:

    Service accounts are used in Windows and other operating systems to run things like Windows services, or to allow scripts to log in to various systems as they run. The common situation is that these service accounts have Domain Administrator or local root access (depending on the OS).

    Know in your heart that the person you are protecting the organization from is the same person who likely created one or all of these accounts.

    Be sure that these service accounts are documented as they are created, so that if a mass change is required it can be done quickly.

    Ensure that these accounts use a central directory (such as AD or LDAP), so that if you need to change or disable them, there is one place to go.

    I posted a PowerShell script in a previous story to inventory service accounts in AD.

    Restrict Remote Access:

    Be sure that your administrative accounts don't have remote access (VPN, RDP Gateway, Citrix CAG, etc.). This falls into the same category as "don't allow administrators to check mail or browse the internet while logged in with Domain Admin or root privileges."

    On the day:

    On the day of termination, be sure that all user accounts available to your administrator are deactivated during the HR interview. If you've used a central authentication store, this should be easy (or at least easier).

    Also force a global password change for all users (your departing admin has probably done password resets for many of your users), and if you have any stale accounts, simply deactivate those.

    For service accounts, update the passwords for all of these. This is a good time to be sure that you aren't following a pattern for these passwords – use long random strings (L33t-speak versions of your company or product name are not good choices here).
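    For generating those long random strings, here is a small sketch using Python's secrets module. The 24-character length and the alphabet are example choices; match them to what your systems actually accept.

```python
import secrets
import string

# Long, patternless password for a service account. Length and alphabet
# are example choices, not requirements.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def service_password(length=24):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = service_password()
```

    The secrets module uses the OS cryptographic random source, which is the right choice here; random.choice is not.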

    I'm sure that I've missed some important things – please use our comment form to fill out the picture. This is a difficult topic; since many of us are admins for one thing or another, it really hits close to home. But for the same reason, it's important that we deal with it correctly, or as correctly as the situation allows.


    Rob VandenBrink

    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.