Our Mission: To reduce your personal and business risks by deriving action items from recent news stories.
Note: Brent LaReau is your point of contact for this blog.
A lot of people spin cloud computing as something that just works and is totally reliable, while at the same time saving millions of dollars. But after reading quite a few news stories I've learned that cloud computing services often stagger and lurch, causing major losses of revenue.
Almost a year ago I blogged about how businesses that had moved their computing infrastructure into Amazon's EC2 cloud lost data during a service outage. Amazon's Elastic Compute Cloud has suffered other failures, such as a network connectivity failure in North Virginia that lasted 25 minutes. Cloud data centers in Dublin belonging to both Amazon and Microsoft were taken out by a lightning strike that caused a power failure; their backup generators did not keep them running. Google Docs has hiccupped several times, in one case bringing companies that relied on this collaborative tool to a standstill for an hour. Ditto for Microsoft's Windows Live, Hotmail, SkyDrive, and Office 365 outage, which brought its users to a halt for five hours due to a network DNS issue. Salesforce.com has had many outages over the years, some especially severe. Terremark's vCloud Express service went offline for about seven hours. Zoho's cloud platform went offline for 10 hours, leaving customers unable to access Zoho's "suite of award-winning online business, productivity & collaboration applications." The list goes on.
I read that someone added up all of Amazon's cloud service outages in the past few years and calculated what it would take for Amazon to meet their stated "99.95%" uptime goal. Apparently Amazon would need to have 100% uptime (no outages) for 15 years. Ouch!
After reading all the news stories and post-mortem analyses, we can understand lightning strikes, incorrect database queries, cascade failures, remirroring "storms", massive power outages, and core network device failures.
But how can we understand this latest cloud outage? Microsoft's Azure cloud service recently went down for two days worldwide, starting on February 28th. It went down because the next day was February 29th instead of March 1st: 2012 is a leap year, and Azure's software couldn't handle the date.
Uh, Azure went down because it's a leap year? The Gregorian calendar formalized our leap-year rules in 1582. So, after 430 years we still can't get it right?
Some very big organizations rely on Azure. For example, the UK government's much-touted G-Cloud service runs on Azure. G-Cloud was supposed to revolutionize their e-mail, word processing and enterprise resource planning (ERP) activities. Well, I suppose that suffering a two-day outage is rather revolutionary.
It's one thing to move one's IT infrastructure to the cloud, only to have the cloud's own infrastructure fail. But it's another thing to run one's business on third-party cloud-based software or platforms, only to have every employee stare at a totally blank screen for hours or days because something has failed at some level inside the cloud. In that case, our risk mitigation strategy is totally different.
(You can see all of my cartoons here.)
Let's inspect the facts of this predicament and outline some action items that we can use to reduce our risks when using third-party cloud-based software or platforms.
- Fact: Even something as simple as word processing can fail in the cloud and stall every employee for hours. I mentioned some examples of cloud failures previously (and those are just the tip of the iceberg!). To my mind—and people can disagree with me—it is a mistake to remove natively-installed office applications from our computers in favor of cloud apps. Why? Installing a native application (like Microsoft Excel) on a computer introduces only two basic points of failure: the computer; and the application software. A failure in either of those two things will stop you cold. However, signing up for a cloud-based office application like Microsoft's Office Web Apps, Google Docs, or QuickBooks Online introduces five basic points of failure: the computer; the web browser software; the Internet connection; the cloud-based application; and the cloud's data center. A failure in any of those five things will stop you cold.
To mitigate the increased risk of downtime due to any of these five points of failure, we can plan for these failures ahead of time and accumulate "spare parts" in case we ever need them in a hurry:
- Spare computer. This could be a desktop, a netbook, or a laptop. It could even be a tablet with a stand and an external keyboard. Even a used or refurbished unit may work in a pinch. Hardware is cheap compared to the price of being unable to run our business.
- Spare web browser. This may sound crazy, but I've seen cases where updating a browser causes problems with third-party plug-ins. Having a backup browser that is known to work with our cloud services could save the day.
- Spare network components and Internet connection. This includes the computer's network interface (whether wired or wireless); Wi-Fi access point or router; and Internet access method (DSL, cable, fiber optic; nearby Wi-Fi hotspot, etc.). Again, spare hardware is cheap.
- Spare cloud-based application. This may sound crazy too, but why put all our eggs in one basket? Knowing how to use two different vendors' cloud apps may allow us to keep working if one goes down. Of course, this idea makes three big assumptions: one is that we can religiously export our cloud data to a local hard drive every day; the second is that we can import that data into our alternative cloud app at a moment's notice; and the third is that we can export back into our original cloud app when it comes back online.
- Fact: Cloud-based application features can change at random. That's because cloud-based apps and features are upgraded "on the fly" whenever a new feature or a software update is available. Users have no control over when to start using the new version or the new feature. In some cases old features will simply disappear. Even security models and graphical user interfaces will change suddenly. Usually we are not informed ahead of time. When changes occur suddenly we are forced to slow down to a crawl or stop entirely until we figure out who moved our cheese. Having to slow down or stop our work places our businesses at risk. And it's not as if we can simply read the updated user's manual, because there is no user's manual. Searching for help in the service provider's web site (including user forums) yields multiple contradictory instructions that are all wrong because all are out of date. Users of Facebook, Google's Gmail, and other online services have already seen this happen. There is little we can do to mitigate the risk of weekly feature creep except "dumb down" the way we use these online products, and hope the simple, fundamental features we rely on every day won't change even if a bunch of optional bells and whistles keep changing.
- Fact: Cloud-based applications can disappear entirely. If you read any cloud provider's "terms of service" blurb you will see that they offer no guarantees of future availability; Dropbox (a cloud storage provider), for example, says as much in its Terms of Service.
Remember Yahoo! Briefcase, Google Health, JumpBox Cloud Gear, Google Mashup Editor, Microsoft Popfly, and the Google Translate API? Gone! There is little we can do to mitigate the risk of yearly service discontinuations except constantly watch blogs and web sites that are devoted to our current cloud services, hoping for a "heads up" if service will be discontinued. We can also constantly watch for new, alternative cloud services that we may be able to use instead. If we find a new service we can carefully assess it while still continuing to rely on our old service. Eventually we can migrate to the new service, hoping that it will stay alive for a while. This is a lot of work.
- Fact: Your cloud-based data can disappear entirely. This applies to both cloud computing services and cloud storage services. Again, refer to your cloud provider's "terms of service" blurb; you could be surprised or even angry at what you find. For example, in 2011 some users were outraged to learn—the hard way—that Uploadingit.com deletes all of a user's files if he or she does not log in for 30 days. Another storage service, Humyo, deletes all of a user's files if there is no account activity for 90 days. Apple's new iCloud service may delete iOS device backups for devices that have not been backed up for 180 days. It's important for us to realize that cloud vendors are under no obligation to retain our data even for the short term, let alone forever. And cloud outages can cause data to be lost or corrupted (remember, Amazon's EC2 cloud lost data during an outage). To mitigate this risk we must figure out how to export or back up our cloud data to physical hard drives (or to another cloud-based service that's unrelated to our main cloud-based service). And we must read and question each service provider's terms of service, and then keep up with any changes to those terms of service agreements. This is a lot of work.
- Fact: Uploading your data to cloud service providers may grant them the right to examine it, modify it, or delete it. Consider the Terms of Service for Apple's new iCloud storage service, which reserve Apple's right to screen your content and remove anything it deems "objectionable".
It's bad enough that a cloud service provider may go out of business, or may suffer an outage, or may even corrupt or lose our data, but it's worse when our service provider maintains the right to parse our data, judge it, modify it, and even delete it if they find our data to be "objectionable". To mitigate this privacy risk we need to read and question each service provider's terms of service, and choose not to use cloud services that seem to infringe on our privacy. We also need to keep up with any changes to those terms of service agreements. This is a lot of work.
Some news stories raise my level of concern even though I'm not really involved in the story's main issue. In this case, the news story was about Angry Birds. I don't play games on my mobile devices, so I've never played Angry Birds. But at least I've seen it by looking over people's shoulders. I never dreamed that a game in which we use a slingshot to launch birds at pigs would need to access our entire mobile address book.
Yet the Angry Birds software application for iOS devices (such as iPhone) makes the ABAddressBookCopyArrayOfAllPeople API call, which returns an array containing all of the "person" records in the device's address book.
Other games that would appear to have no legitimate need to access our entire address book call ABAddressBookCopyArrayOfAllPeople too. For example, "Cut The Rope" makes this call. Does feeding candy to a little green monster really require access to our entire contact list?
Digging into this mystery makes things a little clearer (but it's still troubling). Both Angry Birds and "Cut The Rope" connect to Chillingo's "Crystal" social gaming platform so game players can "Send an invite from your local contacts." This explains why these games need access to our address book, but it doesn't explain why Angry Birds and "Cut The Rope" upload our entire address book to Chillingo's web server, instead of uploading just one person's contact information. This does not seem right.
What about applications other than games? It turns out that Citibank's app calls ABAddressBookCopyArrayOfAllPeople too. Why would our bank need to access our entire address book? Several Google apps access our entire address book, too.
The recent discovery that mobile apps such as Foodspotting, Path, Hipster, Foursquare, and Gowalla invisibly access our mobile device data and even silently upload it to various web servers has sparked a huge scandal.
You might think, "It's no big deal. So what if Angry Birds accesses my address book?"
The point is not that they access our address book. The point is that they access our entire address book when they only need one person's contact information. Consider this scenario: you walk into your bank and they offer you a special deal if only you provide them with a friend's contact information. Naturally you open your phone's address book but the bank's staff grabs your smartphone and copies your entire address book. Would you just shrug and say, "It's no big deal", or would you grab back your phone and say "That's none of your business!"?
Let's survey the facts of this matter and sketch out some action items that we can use to reduce our risks:
- Fact: Apple has promised a future update to iOS will require explicit user permission before an app can access the address book. This is a step in the right direction but doesn't seem to entirely mitigate the fundamental risks to our privacy. First, we have no idea how long it will take Apple to roll out this future update. Several months? Several years? Second, since many people never update their iPhone's iOS operating system, many people will continue to face this risk to their privacy long after Apple has "fixed" the problem. Third, most people will simply click "Yes" when informed that an app is requesting access to their address book. They will do so because the warning message won't say something like, "If you click 'Yes' this app will upload your entire address book to unknown third-party service providers who will sell it to data aggregators, who in turn will sell it to dozens of telemarketing companies that will bug everyone night and day until they buy something!". Fourth, Apple has not promised any updates that will require explicit user permission before an app can upload the address book to a server on the Internet.
- Fact: A utility program called "AdiOS" can tell which iOS apps are coded to call ABAddressBookCopyArrayOfAllPeople. The bad news is that AdiOS runs only on Apple Macs and assumes that our Mac contains a copy of all of the apps installed in our iOS device. It works by searching through our app files for the word "ABAddressBookCopyArrayOfAllPeople". The result is a list of apps containing that call. Windows users can accomplish something similar by searching for files containing "ABAddressBookCopyArrayOfAllPeople" in their iTunes folder tree. Of course, knowing that an app is coded to make that call doesn't inform us of when, if ever, it actually does make that call.
- Fact: There is no easy way to determine if an app is uploading our address book information. But there is a hard way. We can set up a proxy server (such as mitmproxy) between our device and the Internet, configured to capture and analyze our iPhone's Wi-Fi traffic. As if that wasn't hard enough, we need to carefully parse that traffic and determine whether it contains our address book information. This information could have been encoded into some other form that is not obvious without careful study. For example, it could have been compressed using gzip or another method.
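As a taste of what that "careful study" means, here's a small Python sketch of one check we might run on a captured HTTP body: look for a known contact's name, in plain form and in compressed form. Real traffic could use other encodings too, so this is only a first pass:

```python
import gzip
import zlib

def payload_contains(payload, marker):
    """Return True if `marker` (e.g. a contact's name, as bytes) appears in
    a captured HTTP body, checking gzip- and zlib-compressed forms too."""
    if marker in payload:
        return True
    for decode in (gzip.decompress, zlib.decompress):
        try:
            if marker in decode(payload):
                return True
        except Exception:
            pass  # body wasn't in this encoding
    return False
```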
- Fact: There is no current way to prevent an iOS app from invisibly accessing or uploading the address book. There are three ways to mitigate this risk to our privacy. One, the truly conservative person can simply avoid using all apps. Two, we can use tools and methods mentioned previously to find and remove offending apps. Three, we can keep searching the Internet to see if some researcher has used such tools and methods to perform a comprehensive survey of a wide variety of apps, in which case we can remove offending apps.
While reading the news story, I thought, Wow, students worldwide have been busy hacking recently! It has only been a few weeks since I finished my previous blog entry about a student re-routing his high school's Internet traffic.
In the U.S., people under the age of 20 have used computers and the Internet all their lives. Those under 10 have also been around mobile devices such as MP3 players, cell phones, smartphones, and Internet tablets most of their lives. Most people in those age groups are not self-proclaimed hackers but they have endless time and curiosity to explore every avenue and try every combination of settings, menu selections, or passwords until they succeed. Many devices and computers impose no penalty for failed attempts, so there's nothing to stop these kids. It's scary to see three- and four-year-old kids rapidly navigating through multiple levels of game menus quite easily even though they cannot read a word. So today, even little kids know how to blindly hack their way into simple systems.
Older students have tons of free time, plus they have acquired even more capabilities and understanding of computers, devices, and the Internet. One problem is that most teens don't see anything wrong with hacking into things. A study published in 2010 showed that 25% of teens had tried to hack into other people's online accounts. Of those, 78% admitted they knew it was wrong but did it anyway; 46% said they logged into other people's accounts for fun, while 20% did it to make money! 25% had attempted to log into others' Facebook accounts; 18% had tried to gain access to a friend's e-mail account; and 5% said they had tried to hack into their school's web site. I'm sure teen morals have not improved since 2010.
Let's probe the facts of this affair and develop some action items that we can use to reduce our risks:
- Fact: Jewish Free School students brought down their school's web site. Some news reports state that students hacked into the school web site to take it down; other reports state that students mounted a DDOS (Distributed Denial Of Service) attack to make the site inaccessible. Let's examine both scenarios, bearing in mind that we're not talking about professional hackers here.
Let's first consider how an amateur could hack into the school's web site and take it offline. Using Occam's razor we can rule out capturing the web site administrator's password by infecting his computer with keylogging malware; or by connecting a network packet sniffer to his computer; or by using Wi-Fi sniffing tools if he logged in using FTP or HTTP (instead of SSH or HTTPS) over an open Wi-Fi access point; or by targeting him with a Cross-Site Request Forgery attack to silently establish a new password. And using an automated password-cracking tool with a good password dictionary against the web server's HTTP or FTP login port can be ruled out by Occam's razor. What remains are simple, low-tech hacks (as usual). A bunch of students may have spent two months guessing passwords until they stumbled on the correct one (remember what I said about kids having endless time on their hands). They may have simply found the password on a Post-it note in the administrator's (or his assistant's) desk drawer. They may have discovered that the password is the same as is used for every other school computer and online account (which was previously found on a Post-it note in the secretary's office). A student who happened to be standing near the web site administrator while he logged in may have gotten the password by "shoulder surfing" (perhaps through a glass office partition). The student would have likely seen this screen. The school changed to a different web site hosting provider a few days after the hack occurred, as you can see here. (Was this change related to the hack?)
Now, let's consider how an amateur could launch a DDOS attack on the school's web site. Using Occam's razor we can rule out DNS cache poisoning. We can also rule out constructing a botnet, but it's possible that students could have rented someone else's botnet to perform the DDOS attack. But since it might have taken students some time to locate a botnet, and it would have cost them perhaps $50 to $100, we can probably rule that out too. The school's web site was hosted on a Windows Server 2003 computer running Microsoft-IIS/6.0 before the hack, as you can see here. (Notice what it's running after the hack; I'm not sure whether that's significant.) Microsoft has published a number of articles on how to harden Windows Server 2003 and IIS against denial-of-service attacks, as you can see here and here. Therefore, it's possible that a really small DDOS attack (i.e., just a DOS attack) was all that was needed to bring down the school's web site. So, a small group of students could have used Linux computers on good high-speed Internet connections to perform a simple but effective ping flood. Or perhaps they wrote a simple Python script to submit HTTP requests faster than the server could keep up with, or used PyLoris, a Python tool that overwhelms a server with simultaneous TCP connections. If the school was only paying for a low-speed Internet connection for their web site, it could have fallen prey to just one or two Linux computers on high-speed connections. Of course, the students could have also just searched the Internet for freely downloadable hacker tools designed to perform DOS attacks.
Now that we've pieced together some plausible explanations of how these students likely took down their school's web site, we can begin to mitigate our own risks by asking ourselves if someone on par with Jewish Free School students could take down our web site. We will try to answer this question by using Occam's razor to consider only simple, low-tech hacks as described previously. You may laugh at this low-tech approach, but the web site administrator for the Jewish Free School may have laughed too.
Let's begin with the scenario of someone breaking into our web site so that they can take it down. It's easier for a hacker to break into our web site if they know what hosting provider we're using, and what operating system and HTTP server we're running. We need to realize that anyone can learn a lot about our web site using freely available online resources. You can see this for yourself here (yes, please type in your web site's domain name to see what's "under the hood"). Also, it's easier for an insider (such as a disgruntled ex-employee) to break into our web site than an outsider, since insiders know a whole lot more about everything related to our organization. Now we need to ask: could someone break in by simply guessing passwords, even if it takes a week or a month? Better results can be achieved by guessing intelligently instead of randomly. Analysis of the famous 2011 Sony hacks showed that 93% of people used passwords between six and 10 characters long, with 50% of those being less than eight characters. 99% of passwords contained only alphanumeric characters. A lot of people use trivial, naive passwords like: seinfeld, password, winner, 123456, purple, sweeps, contest, princess, maggie, 9452, peanut, shadow, ginger, michael, buster, sunshine, tigger, cookie, george, summer, taylor, bosco, abc123, ashley, and bailey. If our web site's admin is known to be a sports nut, his password could simply be the team's (or mascot's) name! We need to take a look at our web site password. Is it naive and trivial, or is it something like "m*8b+PmQE$" instead? And is the same password used for any other login (whether a physical computer or an online account)? A lot of people use the same password across the board, and this severely weakens security, because if someone stumbles across one password they've actually stumbled across a lot of passwords.
Next, we need to ask ourselves if anyone who administers our web site might have written the password on a Post-it note and put it in plain sight or in a desk drawer where others could find it. Finally, we need to ask ourselves whether the physical layout of our offices allows "shoulder surfing".
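To make the "naive and trivial" test concrete, here's a small Python sketch that applies the patterns from the Sony analysis; the common-password list here is just a tiny illustrative sample, and a real check would load a dictionary of thousands of entries:

```python
import string

# A tiny sample of passwords seen in public breach lists; a real check
# would load a much larger dictionary.
COMMON = {"password", "123456", "seinfeld", "princess", "abc123",
          "sunshine", "winner", "purple", "maggie", "peanut"}

def is_trivial(password):
    """Flag passwords matching the weak patterns from the Sony breach data:
    on a common-password list, shorter than 10 characters, or purely
    alphanumeric."""
    if password.lower() in COMMON:
        return True
    if len(password) < 10:
        return True
    return all(c in string.ascii_letters + string.digits for c in password)
```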
Let's now consider the scenario of someone taking down our web site by using a DDOS or DOS attack. Again, a hacker will probably start with finding out what hosting provider we're using, and what operating system and HTTP server we're using. Again, we should take a look at such information too, so as to better understand what risks we face (yes, please type in your web site's domain name if you haven't already done so). To understand whether our web site is especially prone to DDOS or DOS attacks we can do some web searches to see if our hosting provider has already suffered such attacks. A call to their tech support department may or may not shed some additional light on this; don't be surprised if they really don't seem to know anything about DDOS or DOS attacks. Additional web searches may bring to light that our web server's operating system and/or HTTP server are known to be especially susceptible to such attacks. If so, we can discuss this with our hosting provider's tech support department, but don't be surprised if they really can't help. Even if we work with our web hosting service to tweak our web server's settings to better resist DDOS/DOS, we won't actually know how resistant our site is to these attacks until one happens. Perhaps we could find out by performing a small-scale DDOS or DOS attack ourselves, using tools I had mentioned previously. It's best to discuss this with our hosting service ahead of time, as their monitoring tools could pick up on the attack and trigger an alarm. If they think the attack is severe enough they may even notify some authorities, who could easily trace the source IP address to us. We wouldn't want to get into trouble for attacking our own web site!
- Fact: Students sent fake e-mails to parents, who thought they were sent by the school. According to one news report, a student had downloaded a program to send these spoofed e-mails. This reminds me of the time Microsoft's Bill Gates sent my wife an e-mail, accusing her of pirating Microsoft's software. My wife was in shock. I hastened to explain to her that it was I who had sent her that e-mail, and I showed her how I did it (by altering the "From:" header in my e-mail message). She was mad at me for several days. The same technique is used by spammers to make people think their automatically-transmitted mass e-mails are sent by real people named "Adriana Rodriguez" (or the like). To reduce our risks we need to be aware that it's incredibly easy to change something superficial like a "From:" address in an e-mail, or the human-readable part of a link (URL) in an HTML-formatted e-mail or web page. For example, try clicking this link to see where it actually takes you: http://google.com. Don't worry, it's harmless. This is not even a hacker trick; it's elementary HTML: <a href="http://yahoo.com">http://google.com</a>. That's exactly why we should never click a link in an e-mail, as it may take us to evilhacker.com instead!
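To see just how superficial the "From:" header is, here are a few lines of Python that build a message with a completely invented sender (the addresses are made up, and this only constructs the message object; it doesn't send anything):

```python
from email.message import EmailMessage

# Nothing in the mail format verifies the "From:" header; it is just a
# line of text that the sending software fills in.
msg = EmailMessage()
msg["From"] = "Bill Gates <billg@microsoft.com>"   # entirely invented
msg["To"] = "parent@example.com"                   # hypothetical recipient
msg["Subject"] = "School closed tomorrow"
msg.set_content("Due to snow, school is cancelled.")
```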
- Fact: Students created a fake copy of the school's web site. This is not as hard as it sounds. The simplest fake copy that might fool people would be a screenshot of the school's real main page, "Photoshopped" to reflect whatever the students wanted to express. In this case the fake copy announced the school was closed on account of the recent snowstorm. But a more realistic, fully functional copy of the school's real web site can be made in a few minutes by using a utility program called "wget". This utility is already installed on every GNU/Linux box on the planet, and a Windows port can be found with a quick web search. wget is an automated download tool: give it the URL of a web page and it will happily download that page and all of its supporting content (images, CSS files, etc.) and store everything as files on your hard drive. Yes, it's quite easy for anyone to create a fully working clone of any web site. That's how phishers are able to reproduce an online bank's login page, in the hope that people will promptly type in their username and password. If the students used wget, all they had to do was modify the home page file of their copy. It's true that the students would have had to learn to read HTML before they could alter their copy, but again, kids have all the time in the world.
OK, so the students made a copy of their school's web site and altered it. How did they get their fake copy up on the web so that parents could see it? And how did they get parents to surf to the fake copy instead of the school's real web site? First, we need to understand that it's been very easy to find free web hosting for many years now. All the students had to do was perform a quick web search for "free web hosting" and then pick a provider that allows users to upload their own files (instead of being limited to the hosting provider's fixed-format templates). Signing up for an account takes just a few minutes. No money is involved because you won't have your own domain name; you'll just have a subdomain in their domain, or a folder visible in their domain. In plain English, this means your web site's URL would either be something like "your-site.hosting-service.com" or "hosting-service.com/~your-site". If you think that neither of these URLs would fool parents who may have already memorized the school's real URL, think back to my previous bullet point. Remember my "google.com" link example? After taking down the school's real web site, the students sent parents an e-mail (with a fake "From:" address) containing a link to the fake copy of the web site. It's highly likely that the e-mail they sent was HTML-formatted (like almost every e-mail nowadays), which means the human-readable part of the link would have been the school's real URL, but the machine-readable part would have been the fake copy's URL.
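What wget does is mechanical, and its first step (finding a page's supporting files) can be sketched in a few lines of Python using only the standard library; a real mirroring job would also download each asset and rewrite its links, as wget does:

```python
from html.parser import HTMLParser

class AssetFinder(HTMLParser):
    """Collect the supporting files a page pulls in -- the same links a
    mirroring tool follows after downloading the page itself."""
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "src" in attrs:
            self.assets.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.assets.append(attrs.get("href"))
        elif tag == "script" and "src" in attrs:
            self.assets.append(attrs["src"])

def find_assets(html):
    finder = AssetFinder()
    finder.feed(html)
    return finder.assets
```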
- Fact: Students indirectly demonstrated the lack of security in the school's web site. Perhaps that's why the school moved its web site to a new web hosting service a few days after the hack (as you can see here)! To mitigate our own web site risks, we need to react when we discover a weakness. For example, if we discover that someone (whether a spouse or a staff member) has created a naive, trivial password, we need to change it and establish a new standard for passwords to ensure this won't happen again. If we see employees—or even visitors—watching our web site administrator log in to update our web site, we need to correct that immediately.
As I read a news article about this subject, I realized that people can do some pretty amazing things with their smartphones. In this case the student had a "rooted" Android smartphone (one which has been modified to give its user complete administrative access to its underlying GNU/Linux operating system). A lot of people, even teenagers, have rooted their Android device to gain full visibility into the operating system, and so they can install powerful utility software that requires full access to system resources such as networking.
In this case the student installed an app called Arpspoof, which is a tool intended for network auditing. The Android version of Arpspoof is based on the original Unix version. Arpspoof is commonly used to intercept network traffic flowing between two other computers somewhere on the network. Such traffic is normally not visible at any randomly selected point in the network because modern network switches efficiently forward packets only along the most direct path from A to B. Tools like Arpspoof were unnecessary before network switches were invented, as hubs broadcast all network packets in all directions, so all packets were easily visible everywhere. But this made networks very busy and many packets had to be retransmitted due to collisions. "Layer Two" network switching is much more efficient.
Arpspoof's name is derived from the phrase "ARP spoofing", which is a network technique used to accomplish traffic interception. ARP is short for Address Resolution Protocol. ARP is normally used by computers on a network to locate each other. Commonly, computers will occasionally broadcast an ARP message asking "Who has IP address XXX.XXX.XXX.XXX?" and the computer assigned to that IP address will reply, "I have it! And my MAC address is XX:XX:XX:XX:XX:XX!"
Less commonly, if a computer's MAC address or IP address is changed, it will send an unsolicited ARP announcement saying, "My IP address is XXX.XXX.XXX.XXX and my MAC address is XX:XX:XX:XX:XX:XX!" That way, all other computers within hearing distance can update their ARP cache. The problem is that an ARP announcement can be a lie (stating the wrong IP address or wrong MAC address, or both) and other computers have no choice but to believe it. Basically, Arpspoof works by broadcasting lies.
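The ARP messages described above have a simple, fixed layout on the wire (defined in RFC 826). Here's a minimal sketch in Python that builds and parses the ARP payload for an Ethernet/IPv4 network; the addresses are made up for illustration, and this only constructs the bytes, it doesn't send anything:

```python
import struct

# ARP payload layout per RFC 826 for Ethernet/IPv4:
# hardware type, protocol type, hw addr len, proto addr len, opcode,
# then sender MAC/IP and target MAC/IP.
ARP_FORMAT = "!HHBBH6s4s6s4s"
REQUEST, REPLY = 1, 2

def build_arp(opcode, sender_mac, sender_ip, target_mac, target_ip):
    """Pack the 28-byte ARP payload that rides inside an Ethernet frame."""
    return struct.pack(ARP_FORMAT, 1, 0x0800, 6, 4, opcode,
                       sender_mac, sender_ip, target_mac, target_ip)

def parse_arp(packet):
    """Unpack an ARP payload; returns (opcode, sender MAC/IP, target MAC/IP)."""
    (_, _, _, _, opcode,
     s_mac, s_ip, t_mac, t_ip) = struct.unpack(ARP_FORMAT, packet)
    return opcode, s_mac, s_ip, t_mac, t_ip

# "Who has 192.168.1.1? Tell 192.168.1.10 (at aa:bb:cc:dd:ee:ff)"
mac = bytes.fromhex("aabbccddeeff")
req = build_arp(REQUEST, mac, bytes([192, 168, 1, 10]),
                b"\x00" * 6, bytes([192, 168, 1, 1]))
```

Note that nothing in the format carries any proof of identity, which is exactly why the announcements can lie.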
If that sounds like some pretty nasty hacking, don't take it too hard. Some public Wi-Fi hotspots use ARP spoofing to redirect mobile devices to a "terms and conditions" page before allowing access to the Internet.
Here's an example of how to use Arpspoof. Let's say you want to use computer "C" to intercept traffic flowing between computer "A" and computer "B" somewhere on the network. You would run Arpspoof on computer "C" and tell it about computers "A" and "B". It would then tell "A" to send traffic intended for "B" to computer "C" instead, and tell "B" to send traffic intended for "A" to computer "C" instead. By itself, this would cause communications between "A" and "B" to cease, since packets are being intercepted by "C" without being forwarded to their intended destinations. That's why packet forwarding is enabled on computer "C" as part of the setup, so that "C" passes each packet along to its real destination. The result is transparent monitoring of all traffic flowing between "A" and "B".
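The mechanics of the A/B/C scenario can be modeled in a few lines, treating each host's ARP cache as a dictionary that believes whatever announcement it hears last. This is a toy simulation of the concept, not a working attack tool, and all the names and addresses are illustrative:

```python
# Toy model of ARP cache poisoning: each host maps IP -> MAC and
# trusts any announcement, so C can insert itself between A and B.
MAC_A, MAC_B, MAC_C = "aa:aa", "bb:bb", "cc:cc"
IP_A, IP_B = "10.0.0.1", "10.0.0.2"

cache_A = {IP_B: MAC_B}   # A initially knows B's real MAC
cache_B = {IP_A: MAC_A}   # B initially knows A's real MAC

def spoof():
    """Computer C sends two lying announcements, one to each victim."""
    cache_A[IP_B] = MAC_C  # tells A: "B's IP is at C's MAC"
    cache_B[IP_A] = MAC_C  # tells B: "A's IP is at C's MAC"

def next_hop(sender_cache, dest_ip):
    """Which MAC a packet for dest_ip actually gets sent to."""
    return sender_cache[dest_ip]

spoof()
# Now traffic in both directions is addressed to C, which can log it
# and (if forwarding is enabled) pass it along, unnoticed by A and B.
```

If C forwards nothing, the model also shows the failure mode in the news story: A and B each keep talking to C, and to each other not at all.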
Obviously, configuring Arpspoof can be a bit tricky, and can result in network failure if forwarding is not set up correctly, or if computer "C" cannot keep up with network traffic. That's why it's normally used by network admins instead of high school students with smartphones. The developer who ported Arpspoof to the Android platform wrote in capital letters on his web page: "ONLY USE THIS APPLICATION ON NETWORKS THAT YOU HAVE PERMISSION TO DO SO".
Let's reconstruct the scene of this student's "crime" so that we can learn from it. According to news reports the school's IT staff claimed the student had redirected all outbound Internet traffic to his smartphone. We can assume this actually took place on the school's Wi-Fi network, not on their hardwired Ethernet network, since Android smartphones have built-in Wi-Fi but don't have an Ethernet jack.
The easiest way to intercept outbound Internet traffic on a wireless network is to pretend to be the default gateway for that network. Therefore, the student probably used Arpspoof to tell every computer on the wireless network to send traffic intended for the gateway to his smartphone instead. If the student configured Arpspoof correctly then forwarding would have taken place too, but how can a smartphone keep up with total Internet traffic? News reports indicated that network problems plagued the school for several days before things came to a head and the student's smartphone was confiscated by the police.
(You can see all of my cartoons here.)
Let's scrutinize the facts of this episode and frame some action items that we can use to reduce our risks:
- Fact: The student accomplished all of this through the school's Wi-Fi access point. Most of us see wireless networks as a way to jump on the Internet when we're in a coffee shop. But we need to understand that a wireless network is just as much a full-fledged network as an Ethernet network. We won't just see HTTP web traffic on a wireless network. We'll see a broad spectrum of traditional network protocols, such as ARP and DHCP.
Therefore, we need to treat our wireless network just like any other network. We need to be more careful in setting up a wireless network than we are in setting up an Ethernet network, because wireless networks are more complex. That means we will have more risks to mitigate with wireless networks than with wired networks. To mitigate those risks we will need to plan and document our wireless networks more carefully than our Ethernet networks. We will need to study our Wi-Fi access point's configuration settings more carefully than our conventional Ethernet configuration settings.
- Fact: The student intercepted his school's outbound network traffic. He could have easily spied on other people's Internet activities, assuming that the school was using an open (unencrypted) wireless network. We need to understand that it is much easier to intercept or copy Wi-Fi network traffic than Ethernet network traffic. That's because we don't have to unlock an equipment closet and plug our laptop computer into a managed switch's monitor port. All we have to do is place a smartphone, netbook or laptop anywhere inside the Wi-Fi network's coverage area. We don't even have to play with ARP, since we can read every packet within radio range.
Plenty of tools are available to snoop and sniff inside a wireless network. Some of these are impossible to detect because they use totally passive sniffing techniques (i.e., they don't actively become part of the network, so they are completely invisible on the network). Mitigating the risk of someone intercepting or sniffing Wi-Fi network traffic normally requires protecting network traffic via encryption. On open Wi-Fi networks we can mitigate this risk by establishing a VPN connection through the Wi-Fi network (i.e., by creating an encrypted tunnel).
- Fact: The student shut off his school's Internet traffic. We don't know if this outage was due to his smartphone being unable to keep up with total network traffic, or was due to the student failing to establish forwarding when configuring Arpspoof. Basically, the student performed a denial of service attack on the school's wireless network, whether he intended to or not. We need to consider that a copycat student, disgruntled employee, or ex-employee could do the same thing at our school or company. Defenses against ARP spoofing are described below.
- Fact: Mobile devices can cause real trouble in an organization. This brings up the subject of "Shadow IT", which is where IT devices, systems and third-party services are used inside organizations without approval. A trivial example of Shadow IT is when a tech-savvy employee sets up a new computer for testing purposes, not knowing that it has the same IP address as another computer on the network (resulting in IP address conflicts). Most companies discourage or prevent random software from being installed by employees on company computers. But an employee can bloody well install any of the 450,000 apps he wants to on his own smartphone. He can connect it to the company's network the same way he connects his company-provided laptop computer.
Pretty soon he's opening company spreadsheets on his smartphone, syncing his company e-mail to it, merging his Outlook calendar with his Google calendar, using a Remote Desktop client on his tablet, downloading "torrents", and has turned his Android device into a rogue Wi-Fi hotspot. Shadow IT caught a lot of companies by surprise. Some companies have tried to clamp down hard, while others have tried to embrace "BYOD". One way to mitigate the risk of employee-owned devices being connected to the company network is to at least be aware of what devices are connected. This can be done by automatically monitoring ARP messages and comparing their MAC addresses to a known list. New MAC addresses appearing on the network can trigger an investigation, and MAC addresses can be looked up to determine device type. Also refer to the notes on defenses against ARP spoofing, below.
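The "compare MACs to a known list" idea is simple enough to sketch. Here's a hypothetical helper, assuming we've already gleaned an IP-to-MAC table from ARP traffic (in practice that table would come from a sniffer or a tool like arpwatch; the addresses below are made up):

```python
def unknown_devices(observed_arp, known_macs):
    """Return (ip, mac) pairs whose MAC is not on the approved list.

    observed_arp: dict of IP -> MAC gleaned from ARP messages.
    known_macs:   set of lowercase MACs for inventoried devices.
    """
    return sorted((ip, mac) for ip, mac in observed_arp.items()
                  if mac.lower() not in known_macs)

approved = {"00:16:17:aa:bb:cc", "00:16:17:dd:ee:ff"}
seen = {"10.0.0.5": "00:16:17:AA:BB:CC",   # inventoried company laptop
        "10.0.0.9": "40:fc:89:12:34:56"}   # surprise: someone's phone?
alerts = unknown_devices(seen, approved)   # flags only 10.0.0.9
```

The first three bytes of each flagged MAC (the OUI) can then be looked up against the IEEE vendor registry to guess what kind of device just joined the network.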
- Fact: A lot of powerful utility apps are available for "rooted" or "jailbroken" smartphones. Most people live within the factory-original constraints imposed by their smartphone or tablet vendor, which keeps them from causing real trouble. But those who have "rooted" or "jailbroken" their phones can run a LOT of network tools besides Arpspoof. We need to be aware that anyone walking in the door (whether student, employee, or visitor) could be mapping our wireless networks from their coat pocket.
- Fact: The student knew a lot about computers, software, and networking. Otherwise he couldn't have gotten as far as he did. We need to understand that today's teens grew up with computers, mobile devices, and the Internet, and have had endless time to explore every nook and cranny. We must realize that as these teens enter the workforce, "Shadow IT" will become an even bigger issue.
- Fact: Defenses against ARP spoofing exist, but these require installation and configuration. I'll bet the high school's IT staff wished they had gotten an automatically-generated e-mail the minute Arpspoof meddled with their network. This is possible with a full-blown network Intrusion Detection System (IDS) or Intrusion Prevention System (IPS). But on a smaller scale, several software packages are available that listen for ARP messages and notify administrators when an ARP entry changes. Two free, open source Linux tools that accomplish this are "arpalert" and "arpwatch". I have also read that a fairly new Snort plug-in is available to detect ARP spoofing.
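The core of what arpwatch does, stripped to its essentials, is remember which MAC each IP last used and complain when that pairing changes. Here's a toy sketch of that idea (not the real arpwatch logic, and the addresses are invented); a gateway's MAC suddenly "moving" is exactly the signature an Arpspoof run would leave:

```python
def watch(cache, ip, mac):
    """Record an ARP sighting; return an alert string if a known IP's
    MAC changed (the event arpwatch reports as a changed ethernet
    address), else None."""
    old = cache.get(ip)
    cache[ip] = mac
    if old is not None and old != mac:
        return f"ALERT: {ip} moved from {old} to {mac}"
    return None

sightings = {}
first = watch(sightings, "10.0.0.1", "aa:aa:aa:aa:aa:aa")  # first sighting
alert = watch(sightings, "10.0.0.1", "cc:cc:cc:cc:cc:cc")  # gateway moved!
```

In a real deployment the alert would be e-mailed to an administrator the moment it fires, which is precisely the early warning the school's IT staff lacked.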
I remember reading about the controversy surrounding this topic. One camp had maintained that some industrial control systems are connected to the Internet, while another camp had claimed that these systems are NOT connected. Both camps had mostly anecdotal evidence to support their claims.
What are industrial control systems, and why should we care about whether these are connected to the Internet or not?
Industrial control systems are electromechanical systems used everywhere in industrial sectors and critical infrastructures to monitor and control physical processes, such as electrical power generation, factory assembly lines, water purification, and food and beverage packaging. Lots and lots of these systems are deployed worldwide.
In short, industrial control systems run the world. It would be pretty bad if hackers, hacktivists, or even teenagers broke into industrial control systems. That's why these systems are generally located behind locked doors. Some facilities also have security guards. Vendors of industrial control system components state that these systems should be protected with an "air gap", meaning that their internal communications network should NOT be connected to any outside network. Public statements by owners of these systems repeat their vendors' sentiments, that their systems are "off the grid".
But in the Internet Age the average person probably assumes that everything from Coke machines to skyscrapers can be found on the Internet somewhere. There is an increasing and somewhat disturbing trend towards constructing an "Internet of Things" where billions of things like parking meters and wind speed sensors are connected to the Internet. The rising adoption of IPv6 will make this dream—or nightmare—possible. And the increasing complexity of all systems, combined with decreasing project schedules plus a shortage of highly skilled tech workers, all conspire to put industrial control systems on the Internet after all.
Conservative experts will say, "Why on Earth would you place an industrial control system at risk like that?" But confident experts will say, "Anything is safe if you know how to do it right!" (We assume that "safe" means the use of encrypted communications (VPNs) and strong passwords, at least.) But all of these expert discussions missed the point altogether, as they did nothing to answer the $64,000 question: ARE ANY OF THESE SYSTEMS ACTUALLY ON THE INTERNET?
Finally, a university student named Eireann Leverett has answered that question. And you're not going to like the answer. Leverett located 10,358 industrial control systems on the Internet. Furthermore, only 17% of these asked for a password when he tried to gain access.
Leverett's efforts have already closed some of these security holes. One industrial control system vendor, after finding out about Leverett's findings, told some of its customers that their systems were found online. Several customers responded that they weren't even aware of that fact.
Let's peruse the facts of this issue and devise some action items that we can use to reduce our risks:
- Fact: Free access to critical systems on the Internet is dangerous. Those of us who have industrial control systems (or any other type of critical systems) on the Internet are currently at risk. We need to take immediate action to not only password-protect those systems, but also to place them behind VPN gateways or disconnect them altogether. We also need to determine if hackers have already got in.
- Fact: Mistakes do happen when configuring systems, networks and devices. It's doubtful that thousands of control system engineers and IT personnel deliberately put 10,000+ systems on the Internet, or deliberately left 8,500+ of those systems unprotected by passwords. It's more likely that understaffing and tight project schedules resulted in these "mistakes" (also known as "shortcuts"). If some of these systems were in fact deliberately put on the Internet, it's likely this was intended to be "temporary" during the critical installation and "go-live" phases. Their intentions may have been to air-gap the systems afterwards.
- Fact: A search engine called SHODAN can find control systems and devices on the Internet. With SHODAN there is no reason to guess about whether our systems are on the Internet or not. We can reduce our risks by using SHODAN as Leverett did. SHODAN allows searches to be filtered according to city, country, latitude/longitude, hostname, operating system and IP address. If we find one of our systems on the Internet we can take appropriate steps to disconnect it or put it behind a VPN gateway. We can also determine if hackers have already got in and take necessary steps to clean up that mess.
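SHODAN's filters are written inline as `filter:"value"` pairs, so composing a search programmatically is straightforward. Here's a small hedged sketch of a query builder; actually running the query requires an API key and SHODAN's official Python library (not shown here), and the Modbus example is just an illustration:

```python
def shodan_query(terms, **filters):
    """Compose a SHODAN query string from free-text terms plus
    filter:value pairs (e.g. country, city, os, hostname).
    Filters are sorted only to make the output deterministic."""
    parts = list(terms)
    parts += [f'{name}:"{value}"' for name, value in sorted(filters.items())]
    return " ".join(parts)

# e.g. hunt for devices speaking Modbus (TCP port 502) in one country:
query = shodan_query(["port:502"], country="US")
# The resulting string would be passed to SHODAN's search API.
```

Running a handful of queries like this against our own company name, hostnames, and IP ranges is a cheap way to find out whether one of our "air-gapped" systems is quietly answering the Internet.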
Drive-by downloads are not a new phenomenon. Even back in 2008, Symantec's Norton Community Watch observed more than 18 million drive-by download infections.
I had also mentioned that drive-by download attacks only work on specific combinations of popular operating systems, web browsers, browser plug-ins, application software, and runtime environments. But the distribution of popular software changes every year because users will upgrade to the latest Microsoft Office, or will upgrade their Flash Player because their favorite video web sites demand it. Therefore, criminal software developers who wish to continue distributing malware via drive-by downloads will alter their delivery mechanisms to keep up with such changes.
Here are some fairly recent statistics concerning what software is currently being attacked successfully. These statistics were generated by CSIS Security Group A/S and published by Help Net Security:
- During successful attacks monitored by CSIS, 66% of victims were using the Internet Explorer web browser; 21% were using Firefox; 8% were using Chrome; 3% were using Safari; and 2% were using Opera. CSIS states that Internet Explorer is frequently abused by drive-by downloads, which is visible in these numbers. But these numbers are a bit "fuzzy" since they also must reflect relative market shares of various browsers.
- During successful attacks, 41% of victims were using Windows XP; 38% were using Windows Vista; 16% were using Windows 7; 3% were using Windows Server 2003; and 2% were using Windows 2000. It appears this distribution only reflects the relative market shares of Microsoft's products, and does not illustrate which OS is more vulnerable, as CSIS did not state otherwise.
- 37% of successful attacks targeted the Java Run-time Environment (JRE), including the Java Deployment Toolkit.
- 32% of successful attacks targeted Adobe Reader or Adobe Acrobat.
- 16% of successful attacks targeted Adobe Flash Player.
- 10% of successful attacks targeted Internet Explorer (including its MDAC, WMI Object Broker Active-X, and HTML rendering engine (MSHTML)).
- 3% of successful attacks targeted Windows Help & Support Center (via its HCP protocol).
- 2% of successful attacks targeted Apple's QuickTime player.
Let's review the facts of this topic and plan some action items that we can use to reduce our risks:
- Fact: Most of the successful drive-by downloads monitored by CSIS attacked old versions of software. Simply updating Java, Windows (including Internet Explorer), Adobe Reader, Adobe Acrobat, Adobe Flash Player, and Apple's QuickTime player would have stopped the vast majority of attacks monitored by CSIS. In some cases these security updates have been available for years! Therefore, to seriously reduce our risk we really need to stay on top of security updates.
- Fact: Third-party application software is now more widely attacked than Windows. CSIS' statistics show that Java, Adobe Reader, Adobe Acrobat, and Adobe Flash Player are prime examples of this fact. Their statistics are telling us that we absolutely must keep our third-party application software updated. This is a bit of a problem for us. Unlike GNU/Linux distributions, which publish updates for all of their third-party packages along with their core operating system components at the same time, Microsoft does NOT update third-party software such as Adobe Reader. So we must literally update each third-party software package separately, which is very time-consuming. And if we forget to update a third-party software package we could dramatically increase our risk.
- Fact: Victims were using Windows Vista or Windows 7 during 54% of successful drive-by downloads monitored by CSIS. Both Vista and Windows 7 have Microsoft's User Account Control (UAC), which is a security infrastructure that Windows XP doesn't have. We have to wonder why the UAC didn't stop these attacks. Either the malware never triggered the UAC, or users just blindly clicked "Yes" each time the UAC asked them if they wanted to allow "programs" to run. Either way we stand to learn from this. If the UAC never attempted to stop the malware (due to escalation of privilege or a security hole in the UAC), then we cannot rely on the UAC to provide sufficient security, and we need to continue to rely on additional layers of protection (as I described in my previous blog entry on malware). On the other hand, if users defeated the UAC by blindly clicking "Yes" each time the UAC warned them, then we can rely on neither users nor the UAC to provide sufficient security, and we need to continue to rely on additional layers of protection. Hmmmm...
- Fact: Java continues to be a huge, vulnerable target. This isn't the first time I've heard about hackers planting malware in computers by exploiting security holes in Java. It's ironic that many people have Java installed on their computers yet they never use any applications that require Java, and never access any web sites that require Java. But malware will use their Java to infect their computer. One way for us to mitigate this huge ongoing threat is to simply uninstall Java (if we don't need it), and be done with it. Those of us who need Java's application runtime environment, but do not need Java for any web sites we visit, should disable Java in our web browsers to mitigate drive-by download risks.
- Fact: Three Adobe software products continue to be huge, vulnerable targets. Specifically, Reader, Acrobat, and Flash Player. For several years now, vulnerabilities in these SAME Adobe products have continued to be exploited by hackers to plant malware in Windows computers. It would be nice if we didn't really need these products, but the world has totally adopted these products for its web sites and documents (much to Adobe's delight, I imagine). So it is hard for most people to simply uninstall these products and be done with it.
I have managed to accomplish this feat because I run my business on Linux, not Windows. All major Linux distributions include the authentic Linux ports of Adobe Flash Player and Adobe Reader (although I usually use native GNU/Linux PDF viewers by default). Therefore, I have removed Flash Player and Adobe Reader from all of my Windows boxes. I do not need Acrobat as I publish all of my PDF files on Linux using LaTeX or OpenOffice. Those who rely solely on Windows should consider uninstalling Adobe Reader and installing Foxit Reader instead. To mitigate Flash Player risks, Windows users should consider surfing the web only using Firefox with the NoScript extension, which blocks all Flash content by default. Individual Flash movies (such as YouTube videos) can be unblocked at will.
While reading a news report about yet another cloud services outage, I wondered if clouds are gathering on the horizon. In the context of cloud computing this expression can be taken two ways. People who say the glass is "half full" would say this expression refers to the massive adoption of cloud computing, which they think is a good thing. Others who say the glass is "half empty" are referring to storm clouds heading our way, which would be a bad thing.
Businesses that lost data during the most recent Amazon EC2 cloud service outage probably thought the glass was definitely half-empty. Especially after receiving this friendly e-mail from Amazon:
So, Amazon's cloud hardware failed; they couldn't recover customer data correctly; they generated a dump of hopelessly mangled data; and they fully intended to charge customers for storage of that useless data dump.
Amazon's "99.95%" Service Level Agreement (SLA) allows for its cloud services to be inaccessible for a little more than four hours per year. Apparently Amazon won't meet its SLA this year; some of its customers were down for three days, and Amazon had suffered a previous outage the week before. I'm not sure if Amazon's SLA covers data loss, but there is a big difference between losing computing time (money) and losing data (MONEY).
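The SLA arithmetic above is easy to check: a 99.95% uptime promise leaves a downtime "budget" of 0.05% of the year. A quick sketch of the calculation:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def allowed_downtime_hours(sla_percent):
    """Hours of downtime per year that a given uptime SLA permits."""
    return HOURS_PER_YEAR * (1 - sla_percent / 100)

# 99.95% uptime permits about 4.4 hours of downtime per year --
# "a little more than four hours", as noted above. A three-day outage
# (72 hours) blows through more than sixteen years of that budget.
budget = allowed_downtime_hours(99.95)
```

Numbers like these are why the earlier estimate (Amazon needing 15 outage-free years to average out to 99.95%) is plausible rather than hyperbole.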
You can check on the status of Amazon's cloud services in real time here.
Let's consider the facts of this incident and delineate some action items that we can use to reduce our risks:
- Fact: Cloud computers are made of hardware and software just like traditional computers. We need to stop saying "cloud" and start saying "data center" instead. A "cloud" cannot be brought to its knees by a storm, human error, or component failures. But "clouds" don't exist. "Data centers" do exist, and these can be brought to their knees. This change in terminology will ground us in reality and will clarify our overall cloud computing strategies and tactics.
- Fact: Cloud computing has the same risks as traditional computers. If you're not sure of this fact, talk to some of Amazon's cloud services customers. To mitigate cloud-related risks to our businesses, we need to view the cloud in exactly the same way as all previous computer systems we have used in our businesses. We need to plan for failures. We need contingency plans. We need backups and configuration management. We need trained staff to manage and debug our systems. Traditional "business continuity" activities still apply.
- Fact: Cloud computing risks have to be managed just like for traditional computers. Here's an example. In traditional computer systems it's important to achieve redundancy, such as with a multi-node cluster, a backup generator, extra T1 lines, etc. We pay extra for this duplicate infrastructure. Likewise, Amazon recommends that customers create two instances of each server running a workload, with each one located in a different "availability zone". An availability zone can be a separate section within a data center, with separate power feeds and network connections. It can also be a different data center located in a different city or state. It's best if both server instances are "mirrored" (having identical software, configuration, and data). Obviously, we pay extra for this duplicate infrastructure, just like for traditional computers. Plus during the (constant) mirroring process we also pay to move each byte of data from one availability zone to another. All of this could double the cost of cloud computing. The bad news is that due to weird coincidences, what fails in one availability zone may also fail in other zones, bringing us down anyway despite redundancy. That's what happened in Amazon's case: four availability zones went down for the same reason.
- Fact: Cloud computing has existed for only five years. It's true that Amazon started using its own cloud services internally in 2002, and that more than five years ago other companies achieved some successes (and failures) in what would now be called "cloud computing". But if we look at the birth and rise in popularity of all the "me too" cloud services, it all happened in the past five years. We need to realize that it's still early days. Interoperability and industry standards don't exist yet for cloud computing. We need to design everything from scratch for every cloud service provider we choose. Changing providers midstream can mean reinventing the wheel: a very expensive process!
- Fact: It's harder to learn how to manage risks for cloud computing compared to traditional computing. That's because many experts with 20+ years of experience with traditional computing are available for hire. By contrast, few people are available who know anything about cloud computing, as all the expertise in this field is already employed at cloud service providers' facilities! Plus, these experts have less than five years' experience. As if the "technical resource" obstacles weren't bad enough, we cannot even get our hands on our cloud computer or its network equipment. Instead we have to work indirectly, through obtuse remote management tools and customer support personnel.
- Fact: Failover is neither built-in nor automatic in a cloud. If we do set up two server instances in our cloud, mirrored or not, we will need to manually switch over from one to the other. Depending on circumstances we may need a business partner who can redirect our cloud's Internet traffic from one server to another via the Internet's Domain Name System. This kind of switchover is not quick: there could be a 5-10 minute pause before Internet traffic is redirected from one server to the other.
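The 5-10 minute pause comes from DNS caching: resolvers keep serving the old answer until its time-to-live (TTL) expires. Here's a toy model of a caching resolver, with invented IPs and a hypothetical 5-minute TTL, that shows why repointing a DNS record doesn't redirect traffic instantly:

```python
# Toy model of DNS-based failover. A resolver caches an answer until
# its TTL expires, so stale answers linger after the record changes.
class Resolver:
    def __init__(self, ttl):
        self.ttl = ttl          # seconds the answer may be cached
        self.cached = None      # (ip, expires_at) or None

    def lookup(self, authoritative_ip, now):
        """Return the cached IP if still fresh, else fetch and cache."""
        if self.cached and now < self.cached[1]:
            return self.cached[0]
        self.cached = (authoritative_ip, now + self.ttl)
        return authoritative_ip

r = Resolver(ttl=300)                      # a 5-minute TTL
r.lookup("198.51.100.1", now=0)            # primary server gets cached
# Primary fails at t=60; we repoint the record to the standby:
stale = r.lookup("203.0.113.9", now=120)   # still returns the old IP!
fresh = r.lookup("203.0.113.9", now=301)   # TTL expired; failover done
```

Lowering the TTL ahead of time shortens the window, at the cost of more DNS queries, which is one of those design trade-offs a failover plan needs to settle before the outage, not during it.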
- Fact: It's not "if" a cloud failure will occur; it's "when" a failure will occur. We need to assume that everything in the cloud is going to fail, and then plan how to mitigate that by setting up failover paths to another cloud instance, or to a conventional (non-cloud) data center. This requires some serious system-wide design tactics to ensure that everything can still run after failover occurs. The failover system must be tested in advance of an actual failure. And, depending on the testing burden, the failover system should be re-tested once a month or once a year.
I've been watching the malware scene for years, but the news headline shown above really floored me. For those unfamiliar with the term, malware is short for malicious software, and it includes viruses, worms, Trojans, rootkits, and other threats. Most people call malware "viruses" even though viruses are just one kind of malware.
Most computer users have felt the effects of malware, or have heard someone's horror stories. That's why this type of software is called "malicious". Everyone knows that malware can ruin our day. Sometimes malware just quietly overrides our browser's home page, or inserts spam web pages into our surfing stream so that we see advertisements for p**nography or pharmaceuticals. In other cases malware can use our computer to silently send thousands of spam e-mails per day. Or invisibly steal our bank account passwords. Or unobtrusively make our computer a member of a botnet, under the remote control of someone in another country who can then use our computer for whatever he or she wants. Such as performing a Distributed Denial Of Service (DDOS) attack on mainstream web sites such as those belonging to the CIA and Twitter (yes, these web sites have been attacked). All of these criminal activities are profitable, which today is the only reason disreputable software developers create malware.
You may be thinking, "Brent is way off base. Most people's computers are NOT infected by all this malware, because their computers work just fine. Besides, anti-virus will protect them. What's the big deal?"
With all due respect, the facts do not support those opinions. Here's why:
- Unobtrusive malware is more profitable. Think about it. If you were a criminal software developer whose malware ravaged users' computers, wouldn't they know it right away and take steps to remove it immediately? How could installing your malware on people's computers be profitable under those circumstances? Therefore, today you will find that most malware is designed to be fairly well-behaved and unobtrusive. CPU and resource usage are minimized to prevent a noticeable slowdown. Spam e-mail transmissions are throttled back, or timed to occur at night, to allow fairly normal web surfing and multimedia download speeds. People don't notice any symptoms of infections, so they don't react to the malware, and criminals can reap the most profit.
- Most people don't "look under the hood". How many computer users have ever used the NETSTAT command to see what network connections exist? They might be surprised to see their computer is connected to a botnet command-and-control server in Romania! Of course, they wouldn't know it was in Romania because most computer users don't know how to geo-locate an IP address either. How many computer users have ever checked to see what background processes are running on their computer? Or checked to see what software is automatically started when their computer boots up? Or used a network packet sniffer to analyze their computer's network traffic during the night when they are asleep?
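Looking under the hood doesn't have to be hard. A first pass is to scan netstat's output for ESTABLISHED connections to addresses outside our own networks. Here's a hedged sketch that parses netstat-style lines; the column layout and sample addresses are illustrative and vary by operating system:

```python
def foreign_established(netstat_lines,
                        local_prefixes=("10.", "192.168.", "127.")):
    """Pick out ESTABLISHED connections to non-local addresses from
    netstat-style output lines (proto, local addr, foreign addr, state).
    Adjust local_prefixes to match your own address ranges."""
    hits = []
    for line in netstat_lines:
        fields = line.split()
        if len(fields) >= 4 and fields[-1] == "ESTABLISHED":
            foreign = fields[2]
            if not foreign.startswith(local_prefixes):
                hits.append(foreign)
    return hits

sample = [
    "tcp  192.168.1.20:54012  192.168.1.1:443    ESTABLISHED",
    "tcp  192.168.1.20:49321  89.41.27.5:6667    ESTABLISHED",
    "tcp  0.0.0.0:445         0.0.0.0:0          LISTENING",
]
suspects = foreign_established(sample)  # flags only the second line
```

A persistent connection to an unfamiliar foreign address on an odd port (6667 is the classic IRC port, long favored by botnet command-and-control) is exactly the kind of thing worth geo-locating and investigating.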
- Anti-virus-resistant malware is more profitable. Sorry to tell you this, but yes, some malware quietly shuts down or disables your anti-virus software so that the malware can run. How is this possible? First, consider that most people use popular, free anti-virus they download from the Internet. Criminals can download the same popular, free anti-virus too, and then figure out how to write software that will shut it down once their malware has gained a foothold in a computer. The real trick is how to evade the anti-virus while shutting it down. This is done most simply by making sure the malware is unknown to popular, free anti-virus software on the day the malware gains a foothold in a computer. More about this below.
- Malware doesn't need to live long to be profitable. Let's say you're a criminal software developer who writes malware designed to send spam e-mails. You know you can infect a bunch of computers, but you also know that anti-virus updates will kill your malware hours or days after it infects any given computer. Would you decide that anti-virus has won, and give up writing malware to become a used car salesman instead? Or would you decide to send spam for as many hours as you can on any given computer, and make up the loss by infecting new computers just as fast as malware is being killed on already-infected computers? What is the difference between sending spam from the same 100,000 computers per day, and sending spam from a different set of 100,000 computers every day?
Unfortunately, the number of malware released into the world each year is not constant. It has been increasing by leaps and bounds each year. I remember being surprised when the number of known malware samples finally topped one million, not too many years ago. But 20 million new malware were created in just the first 10 months of 2010! As of November, 2010, PandaLabs' malware database had 60 million entries!
Since malware is good for criminals and bad for us, we need to ponder the facts about malware and elaborate some action items to reduce our risks. First, the facts:
- Fact: The number of malware is increasing exponentially over time. We need to understand that software developers don't produce two million new pieces of malware per month because it's more profitable than producing, say, one million new ones per month. They produce this staggering amount each month because anti-virus companies cannot keep up with this deluge. Computers will be infected when anti-virus cannot keep up with new malware, and the software developers won't get paid unless their malware will actually infect computers. Therefore, software developers create as many new malware per month as is required to keep ahead of anti-virus and actually infect computers. Basically, this means traditional anti-virus is dead; we can no longer rely on anti-virus to protect our computers. Does this mean we shouldn't use anti-virus at all? No, that's not what I'm saying. We still need anti-virus to combat the 60 million known pieces of malware. What I'm saying is that to protect our computer from brand-new malware we need to put into place additional layers of protection that most people don't have (more about this below).
- Fact: Malware is designed for the masses. If you were a criminal software developer whose paycheck depended on successfully infecting computers, what kinds of computers would you try to infect? Hundreds of millions of fairly identical off-the-shelf Windows computers that boot up directly into an administrator account and run Internet Explorer (with fairly typical free anti-virus software and fairly typical free software firewalls)? Or would you prefer to target hundreds of different kinds of custom-configured computers that boot into limited user accounts, run alternative web browsers, run unusual anti-virus software, and/or run hardware firewalls? Most criminals attack quick, typical targets (the "low-hanging fruit") instead of time-consuming, unusual targets. Therefore, we need to use some additional layers of protection that most people lack, which will convert our generic computer and network configurations into unusual, complex, and diverse configurations (more about this below).
- Fact: Malware is increasingly targeting application software instead of Windows itself. Eight years ago, Microsoft's Bill Gates sent out a memo establishing their Trustworthy Computing Initiative, which eventually spawned their Security Development Lifecycle. Over time this improved the security and stability of Microsoft's operating systems. The result? Criminal software developers shifted their focus from vulnerabilities in Windows to vulnerabilities in application software (such as the QuickTime Player) that runs on Windows. Why? Because most companies that produce application software don't have a "Trustworthy Computing Initiative" or a "Security Development Lifecycle". So, it's now easier to attack, say, Adobe Reader than Windows itself. What is the purpose of attacking application software? To gain access to Windows itself, for the purpose of planting malware in a computer. In the past few years, vulnerabilities have been discovered in Firefox, iTunes, QuickTime, Skype, Adobe Reader, JRE, Flash Player, WinZip, AOL Instant Messenger, Windows/MSN Messenger, Yahoo Instant Messenger, Bitdefender anti-virus, and RealPlayer.
Here is a brief summary of how attacking an application program like, say, WinZip, can infect Windows. First, a criminal software developer needs to figure out how to create a specially corrupted ZIP, RAR, or other compressed file that would overflow some convenient data storage variable inside some version(s) of WinZip, resulting in what is commonly called a "buffer overflow". Buffer overflows can cause a program to crash in interesting and predictable ways. Second, the criminal developer needs to figure out how such a buffer overflow can result in the execution of "shellcode" he has buried in the corrupted ZIP file. Shellcode is primarily used as a steppingstone to download and install other malware from the Internet. Third, the criminal needs to figure out what malware to install in a victim computer once his shellcode executes. After he has worked through all three steps he can upload his specially corrupted ZIP file to web servers that host drive-by downloads, or he can distribute his ZIP file via spam e-mails. Then he can sit back and wait for results as his attack begins to succeed.
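To make the length-field trick concrete, here's a toy sketch in Python. This is not the real ZIP format; it's a made-up mini-format with a 4-byte length field, and Python itself can't actually overflow a buffer, so the "unsafe" version only models the trust problem that vulnerable C code had:

```python
# Simplified sketch of the buffer-overflow setup described above.
# Hypothetical mini-format: 4-byte little-endian length, then entry data.
import struct

BUFFER_SIZE = 64  # the parser's fixed-size "buffer"

def parse_entry_unsafe(blob: bytes) -> bytes:
    # Trusts the attacker-controlled length field, as vulnerable C code
    # often did. In C this copy would scribble past the 64-byte buffer;
    # Python slices merely clamp, so this only models the mistake.
    (length,) = struct.unpack_from("<I", blob, 0)
    return blob[4:4 + length]

def parse_entry_safe(blob: bytes) -> bytes:
    # Validates the length field before using it.
    (length,) = struct.unpack_from("<I", blob, 0)
    if length > BUFFER_SIZE or length > len(blob) - 4:
        raise ValueError("declared length exceeds buffer -- corrupt file")
    return blob[4:4 + length]

# An attacker declares 5000 bytes of data for a 64-byte buffer:
corrupt = struct.pack("<I", 5000) + b"\x90" * 32  # 32 bytes of "payload"
```

The unsafe parser happily accepts the lie; the safe one rejects the file. The real fix in WinZip-class software is exactly this kind of bounds checking.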
- Fact: Anti-virus updates always lag behind new malware. Years ago, very few new malware were released into the world each year and it took many days for the malware to spread worldwide. Anti-virus companies had sufficient time to develop and distribute a new anti-virus signature before the new malware had spread very far. Computer users had several days to install the new signature before the new malware would likely hit them. But as the years went by, more and more new malware were released each year, and more and more computers were connected to the Internet (which spread the malware across the world faster). These factors caused computer users to have FEWER days to install a new signature before the new malware hit them. Over the years the number of days was reduced to ZERO, which prompted security researchers to invent the term "zero-day threat". This meant malware reached people's computers zero days after new anti-virus signatures became available.
In recent history the number of days has actually become negative, which means the malware hits our computer hours or days before new signatures become available. But instead of calling such threats "negative-day" we still use the term "zero-day". It's important for us to know what this means: ALL anti-virus software is useless against NEW malware for a few hours or a few days until anti-virus updates become available for the new malware. In simple terms, this means our computer is always completely unprotected against all brand-new malware. Please read that last sentence again. Should brand-new malware actually hit our computer just after being released, our computer will be infected. Again, we need to use some additional layers of protection that most people lack (more about this below).
- Fact: Heuristic anti-virus detection methods don't solve the zero-day problem. In recent years, anti-virus companies have tried to address the "zero-day" problem by incorporating heuristic methods to supplement old-fashioned anti-virus signatures. Such methods attempt to detect previously unknown malware without requiring a new signature. We need to know that such heuristics are only partially effective. How can I say this? Because software developers wouldn't bother to produce two million new pieces of malware per month if heuristics combined with old-fashioned virus signatures were completely effective. There would be no profit in that. And, anti-virus heuristics are deliberately tuned to prefer false negatives over false positives, which means heuristics are less useful in practice than in theory. Why are false negatives preferred? Because too many false positives would result in a storm of customer complaints and people would use another anti-virus instead. While heuristic methods do help somewhat, we still need to use some additional layers of protection (more about this below).
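Here's a toy illustration of that tuning trade-off. The suspicion scores and thresholds below are invented for the sake of the example:

```python
# Toy false-positive / false-negative trade-off in heuristic detection.
# Files get a suspicion score; the vendor picks a threshold. A high
# threshold means few false alarms but more missed malware.

# Hypothetical samples: (suspicion_score, actually_malware)
samples = [(0.95, True), (0.80, True), (0.55, True),
           (0.60, False), (0.30, False), (0.10, False)]

def evaluate(threshold: float):
    """Return (false_positives, false_negatives) at this threshold."""
    fp = sum(1 for score, bad in samples if score >= threshold and not bad)
    fn = sum(1 for score, bad in samples if score < threshold and bad)
    return fp, fn

print(evaluate(0.90))  # (0, 2): no false alarms, but two malware missed
print(evaluate(0.50))  # (1, 0): everything caught, one clean file flagged
```

Vendors sit near the high-threshold end to avoid complaint storms, which is exactly why heuristics miss more in practice than in theory.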
- Fact: Not all anti-virus software is equal. We need to know that for any given piece of malware, some anti-virus software will detect it, and some won't. And some will detect it sooner than others. And some will never detect it, ever. This can best be demonstrated by exploring virustotal.com. This free web site has approximately 40 well-known, mainstream anti-virus software packages installed, which are updated constantly (24/7). VirusTotal can be used two ways. First, we can upload an infected (or possibly infected) file to them, and they will scan it with all of their up-to-date anti-virus software engines simultaneously. We will receive the results in a minute or two. Second, we can explore their vast database of previously-scanned files. Either way, we will quickly find that for any given piece of malware, some anti-virus products will detect it, and some won't. It's fun to print the results, wait a day, and then resubmit the same file to be scanned again. You might be surprised to find that only 8 of 40 anti-virus products will detect it the first day, but 22 will detect it the second day, and perhaps 37 will detect it the third day. But even after a week you will see that a few (and sometimes more than a few) anti-virus products simply won't detect the submitted file as malware. Before we leap to conclusions about which anti-virus software is "better", we need to be aware that the next malware sample we upload may yield totally different results. The anti-virus product that never detected malware sample "A" may be the first to detect sample "B". To mitigate this maddening lack of consistency, we need to use some additional layers of protection as described below.
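The day-by-day experiment boils down to a little bookkeeping like this. The engine names and results below are hypothetical, not real VirusTotal output:

```python
# Hypothetical multi-engine scan results for one sample over three days,
# in the spirit of the VirusTotal experiment described above.

day_results = {
    1: {"EngineA": True,  "EngineB": False, "EngineC": False, "EngineD": False},
    2: {"EngineA": True,  "EngineB": True,  "EngineC": False, "EngineD": False},
    3: {"EngineA": True,  "EngineB": True,  "EngineC": True,  "EngineD": False},
}

def detection_rate(results: dict) -> str:
    """Format 'hits/total' for one day's scan results."""
    hits = sum(results.values())  # True counts as 1
    return f"{hits}/{len(results)}"

for day, results in sorted(day_results.items()):
    print(f"day {day}: {detection_rate(results)} engines detect the sample")
# EngineD never detects it -- and a different sample might flip the ranking.
```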
- Fact: Installing multiple anti-virus products is better than installing only one. From the previous discussion about VirusTotal we should realize that scanning with a lot of different anti-virus products will more likely identify an infected file as malware than will scanning with just one product. Each anti-virus scanner we have is an additional layer of protection. But before we rush out to purchase a dozen anti-virus products, we need to know that this is very expensive and will also fail to work out unless we are very clever (more about this shortly). It will fail for two main reasons. First, the vast majority of anti-virus products are NOT designed to coexist with other anti-virus products. Installing the second one may cripple the first one. Installing the third one may render our computer inoperable. Second, each anti-virus uses a chunk of CPU time, memory, and hard disk space. Installing more than one will bog down our computer by consuming too much CPU time, or too much memory. One way around these problems is to install two types of anti-virus scanners that don't conflict with each other.
I'm sure most people are aware of the first type, the real-time anti-virus scanner, because this is the normal kind that everyone uses. But some people may be unaware of the on-demand type of anti-virus scanner. Unlike real-time scanners, on-demand scanners don't constantly watch every byte of data flowing through a computer in real time. Indeed, they won't even run unless we start them up. And then they still won't watch every byte of data flowing through the computer in real time. Rather, they will patiently scan the computer's memory, registry, system files, cookies, and (if we desire) every byte on the hard drive. And they can do this while the normal real-time anti-virus scanner is working, without conflict. But of course, the more stuff running on a computer, the more bogged down it will be. That's why it's best to use an on-demand scanner during off-peak hours. One final note about on-demand scanners: some of these can be run from a Flash drive without being "installed" on a computer. I enjoy using these "portable" anti-virus versions, since I can carry them with me everywhere.
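To demystify what an on-demand scanner does at its core, here's a minimal sketch: walk a directory tree and flag any file whose SHA-256 hash appears in a known-bad set. Real scanners add heuristics, unpacking, and memory scanning; this shows only the signature-lookup idea:

```python
# Minimal on-demand signature scan: hash every file under a root
# directory and report matches against a set of known-bad hashes.
import hashlib
import os

def sha256_of(path: str) -> str:
    """Hash a file incrementally so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def on_demand_scan(root: str, bad_hashes: set) -> list:
    """Return paths of files whose hash matches a known-bad signature."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if sha256_of(path) in bad_hashes:
                hits.append(path)
    return hits
```

A scan like this chews CPU and disk I/O for as long as it runs, which is exactly why off-peak hours are the right time for it.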
- Fact: Most malware enters a computer through its web browser. We need to know that malware keeps changing every year, and the way malware infects computers keeps changing, too. In "ancient" times, all malware (specifically viruses) entered computers through files on floppy disks. Some time later worms entered computers through open network ports (before the massive deployment of modern firewalls that block unnecessary ports). Today, malware can enter a computer many different ways not only because the malware industry has grown, but because there are many more ways to get data into and out of computers. But in reality most malware infections actually enter our computer through the web browser. Just a few years ago, using our browser to check our web-based e-mail account could cause malware embedded in an e-mail message to be downloaded to our computer. But this is less common today, as web-mail providers typically perform an anti-virus scan on all e-mails and attachments. So criminals who desire to infect computers via web browsers have largely shifted their focus from infected e-mails to "drive-by downloads". A drive-by download is simply extra, invisible, malicious content that criminals insert into ordinary web pages.
- Fact: Malware is compiled to run on specific operating systems. The vast majority of malware is currently compiled to run on popular versions of Windows. In the past few years several pieces of malware were finally compiled to run on Mac OS X. This created quite a stir and some Macs were infected, but current OS X malware is thought to be more experimental than mainstream. An increasing number of malware are being compiled to run on Android mobile devices, so we need to watch this carefully as it is unlikely that effective anti-virus scanners can run on mobile devices due to extremely limited resources. We haven't seen any malware compiled to run on Apple iOS mobile devices yet (except for jailbroken devices). We also haven't seen malware compiled to run on desktop Linux operating systems yet, as their global market share remains fairly constant at just a few percent. The fact that malware is compiled to run only on certain popular operating systems should clarify our malware mitigation strategy (more about this later).
- Fact: Firewalls do NOT stop malware from entering a computer via the network. Actually, I could manufacture a firewall that DOES stop malware from entering via the network. It would be an empty box with dummy Ethernet connectors on each end, so that absolutely no packets could flow through the box. But real firewalls DO allow specific network traffic to pass through. Since malware can easily be embedded in ordinary network traffic like database communications and web pages that pass right through firewalls, it's pretty clear that firewalls don't stop malware. Does that mean firewalls are useless? No, that's not what I'm saying. Firewalls are absolutely required to provide isolation except for specific types of traffic. A worm on the Internet seeking TCP port 139 will bounce off a firewall that is configured to block port 139. Anyone who has put a packet sniffer directly on the Internet knows that ping sweeps, port scans, and OS/service fingerprinting attempts occur 24/7. These would succeed brilliantly without a firewall. Therefore, a firewall is an important part of our total risk mitigation strategy, but it's not really intended for malware prevention.
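The "specific traffic passes, everything else bounces off" behavior boils down to a default-deny rule set. Here's a toy sketch; the allowed ports are just examples:

```python
# Toy default-deny packet filter: traffic to an explicitly allowed port
# passes (malware riding inside it included!), everything else bounces.

ALLOWED_TCP_PORTS = {80, 443}  # example: only web traffic is permitted

def firewall_allows(dst_port: int) -> bool:
    """Default-deny: only explicitly allowed destination ports get through."""
    return dst_port in ALLOWED_TCP_PORTS

print(firewall_allows(139))  # False: the port-139 worm bounces off
print(firewall_allows(80))   # True: web pages (and drive-by downloads!)
```

Note what the second line implies: the firewall happily passes port 80, so a drive-by download inside a web page sails right through. Isolation, not malware prevention.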
- Fact: Application software has the same privileges as the user who opened it. Traditionally, home users have booted Windows directly into an administrative account without even knowing it. The same was true for many company employees. Even if people had to log in, they typically logged into an admin account. This is very dangerous. When we start software it almost always has the same privileges as us. If we're logged into Windows as administrator then the software we open has full admin privileges. Therefore, Internet Explorer or Adobe Reader (etc) could write files into Windows' System32 folder or even the registry if it wanted to. Now, consider the drive-by download scenario I described previously. What better way to infect a computer than to do it with full admin privileges? To mitigate this risk we need to open programs with limited privileges, not full admin privileges. There are three ways to accomplish this. First, we could log into a limited Windows user account and then open our software. Second, we could log into the admin account as usual, but find some way to open our software as if we were another user who has limited privileges (the "Run As" method). Third, we could log into the admin account as usual, but use a utility program to reduce our privileges whenever we open software (the "drop my rights" method).
I have used all three methods, and all three have drawbacks. However, the first method is clearly the best, where users log into limited user accounts. This method has three main drawbacks. First, to install software or change settings we will need to log in as administrator and then log back in as a limited user. All that logging in and logging out is a nuisance, and criminal hackers know this! Second, some software will suffer strange problems when run in a limited user account. Especially older software, which was developed back in the day (like 15 minutes ago) when most people logged into Windows only as administrator. It takes special care to deploy software in a multiuser environment. Third, if we have previously used Windows only as an administrator but suddenly we have logged into a new, limited user account, it won't take long to notice that everything we know and love is missing! Everything from wallpaper to music files is gone. Well, it's not gone, we just have to move everything from its old home to its new home. This is not easy. I have done this with two other people's personal computers and I'm not looking forward to a third. (No, I won't help you to move your iTunes so it works in a limited user account. Been there and done that!)
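Whichever method we pick, step one is knowing whether our programs are actually running with admin rights. Here's a small cross-platform sketch; the Windows branch uses the shell32 IsUserAnAdmin call, and elsewhere it falls back to the Unix effective user ID:

```python
# Detect whether the current process has administrative privileges.
import ctypes
import os

def running_as_admin() -> bool:
    if os.name == "nt":
        try:
            # Nonzero means the process token has admin rights.
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except OSError:
            return False
    # Unix-like systems: effective UID 0 is root.
    return os.geteuid() == 0
```

If this returns True for the account you use to read e-mail and surf the web, every program you open (and every drive-by download that exploits one) has full run of the machine.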
- Fact: Flash drives are the new floppy disks. Everyone carries one. School kids have flash drives shaped like race cars. Women wear heart-shaped pendants that are actually tiny flash drives. Criminal hackers know this, so they create malware (similar to traditional worms) that specifically target flash drives. Of course, there is little difference between a flash drive and a camera's SD card, so malware creeps into our cameras too. And the next time we plug our camera or flash drive into someone else's computer, guess what happens? The malware spreads, usually due to wonderful Windows features known as AutoRun and AutoPlay. These "convenience features" either will automatically run software programs (like malware) that Windows discovers on flash drives, or will kindly ask if the user wishes to allow such programs to run. And of course most people will instantly click "Yes" or "OK" without even reading the warning. The end result is that any malware on the flash drive will install itself on the computer. Almost every USB device, from digital photo frames to MP3 players, has been found to carry malware (sometimes fresh from the factory).
Therefore, to mitigate this risk, it is imperative to disable AutoRun and AutoPlay. Unfortunately, Microsoft has made this difficult due to contradictory instructions and confusing documentation, so it will take some time to learn about this, and we will need to prove that our configuration changes actually work as expected. Other ways to mitigate the risks of flash drives include scanning them with an absolutely up-to-date anti-virus before plugging them into our computers (which is usually impossible); never using them at all (which is usually impossible); never sharing them between computers (which is usually impossible); and only using flash drives with Macs or GNU/Linux computers (which is usually impossible). Now you can see why criminal hackers like flash drives. They like Microsoft, too.
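For reference, one widely documented way to disable AutoRun for every drive type is the NoDriveTypeAutoRun policy value, shown here as a .reg file (the 0xFF mask covers all drive types). Treat this as a starting point, not the whole answer:

```reg
Windows Registry Editor Version 5.00

; Disable AutoRun for all drive types (0xFF = every drive-type bit set).
; This is the documented NoDriveTypeAutoRun policy value.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```

On some older Windows versions a prerequisite update is needed before this value is fully honored, which is part of the confusion mentioned above. Either way, prove the change works: plug in a test flash drive and confirm that nothing runs or pops up automatically.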
MALWARE ACTION PLAN --- To reduce our risks related to malware, we need to add many additional layers of protection as follows:
- Update anti-virus as often as possible. Some anti-virus software permits the user (or corporate IT staff) to choose how often to update anti-virus signatures. You may be surprised to find your anti-virus has a default setting of "once per week"! This is far, far too seldom to do any good at all. To lessen our risk we should choose the smallest possible update interval (once an hour, if possible; once a day otherwise). In the enterprise we can specify a staggered or random update cycle to prevent every computer in the building from flooding the network with simultaneous update requests.
- Use a custom anti-virus configuration. We must not rely on the anti-virus vendor's default configuration. We need to take time to really read and understand each and every configuration setting available for our anti-virus. You may be puzzled to find that, by default, your anti-virus excludes certain file types (or file names) from its scanning. If you were a criminal software developer, wouldn't you prefer to hide your malware in files that are excluded from anti-virus scanning? So, everyone—home users and corporate IT staff alike—should consider tightening up their anti-virus scanning so that it doesn't exclude so many things. We should also consider configuring our anti-virus heuristic scan to make it a bit more sensitive. This will increase the effectiveness of our anti-virus by decreasing our false negatives, but on the other hand we may start to see some false positives. More about this next.
- Analyze all false positives. Once we have tuned our anti-virus to become more sensitive (as described above), we can expect more fallout from our scans, and these will take time to resolve. Plan for it (as if we have any extra spare time!). Also, we need to learn how to check our anti-virus log (otherwise we may not even know that it's finding malware). It's a good idea to write a short procedure to remind ourselves of how to deal with any possible malware we find. VirusTotal—the web site I mentioned previously—is our best friend. Simply upload a suspicious file to see how many anti-virus products identify it as malicious. If VirusTotal has already seen the file, have it re-scanned (i.e., with the latest anti-virus signatures).
- Write a Malware Incident Response Procedure. The worst time to write this document is during a massive malware attack that has made our personal computer as useful as a bag of rocks, or has brought our company to its knees. (Yes, I recall the Conficker/Downadup infections of 2008-2009.) The best time to write this document is when we have time to think it through and get it right. The Procedure should include the sections shown below. (If you want an example of a very comprehensive malware incident prevention and handling guide, you can download Special Publication SP 800-83 from the National Institute of Standards and Technology's web site.)
- Purpose: An overview of when this Procedure is to be used, and what it will accomplish. Will this Procedure be invoked every time anti-virus finds something, or when someone notices a computer acting strangely, or just when a fairly large outbreak is noticed? Will each infected computer be "wiped" and re-imaged automatically, or will computers be repaired and disinfected file-by-file? Will any forensics be involved, such as for infections of upper management's computers?
- Responsibilities: An overview of who will be responsible for carrying out this Procedure. IT staff? Individual users? Contractors or third parties?
- Prioritization: How each malware incident will be prioritized, so as to react appropriately. Is a worm infection higher priority than a Trojan infection? Is a 100-computer infection more important than a single-computer infection (even if that single computer is the CEO's)?
- Categorization: An overview of each type of malware incident covered by this Procedure. Will a rootkit be treated the same as a keylogger? A worm the same as Adware? A large customer's computer the same as an employee's computer? A finance department computer the same as the receptionist's computer?
- Timeline: How long this Procedure will be allowed to take for each priority level and category. One hour? One day? One week? Will the schedule be any different after normal business hours?
- Mitigation: An overview of how each malware incident will be resolved, for each priority level and category. What tools, accessories (such as write-protectable flash drives), or software (such as Linux LiveDVDs, "emergency" anti-virus disks, Nmap, or Wireshark) will be required for this Procedure? Will infected computers be transported to the IT department, or will someone visit each infected computer?
- Recovery: An overview of how lessons learned from each incident will be captured to help improve this Procedure and/or to allow the organization to improve its defenses against future malware infections.
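To show how the Prioritization and Categorization sections might interact, here's a hypothetical triage sketch. The categories, weights, and thresholds are invented; a real Procedure would define its own, agreed on in advance rather than in the middle of an outbreak:

```python
# Hypothetical incident triage: combine malware category, outbreak size,
# and whether a VIP machine is involved into a single priority score.

CATEGORY_WEIGHT = {"worm": 3, "rootkit": 3, "trojan": 2, "adware": 1}

def incident_priority(category: str, machines_hit: int, vip: bool) -> int:
    score = CATEGORY_WEIGHT.get(category, 2)  # unknown types score mid-range
    if machines_hit >= 100:
        score += 3   # large outbreak
    elif machines_hit >= 10:
        score += 1   # spreading
    if vip:          # e.g. the CEO's laptop
        score += 2
    return score

print(incident_priority("worm", 100, False))  # 6: spreading outbreak
print(incident_priority("adware", 1, True))   # 3: single VIP machine
```

Writing the scoring down ahead of time means responders argue about numbers once, calmly, instead of during every incident.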
- Use alternative web browsers. As explained previously, criminal software developers target popular combinations of operating systems, web browsers, browser plug-ins, application software, and runtime environments (such as Java) through drive-by downloads. Switching to an "unpopular" web browser can defeat drive-by downloads that depend on a specific web browser. You can see what web browsers are currently popular here. At the end of 2010, Internet Explorer (IE) was still the dominant web browser, hence it was still the prime target. Firefox is gaining on IE fast, so we can assume that it, too, will become a prime target in the next year or two. But for now, Firefox, Opera, Chrome, and Safari are less likely to fall prey to drive-by download attacks than IE. One reason is that alternative browsers usually don't support ActiveX-based plug-ins like IE does. ActiveX is very powerful software, and many third-party ActiveX plug-ins contain vulnerabilities that have been exploited by criminals to gain full access to people's computers. Another reason is that Microsoft issues security updates for IE only once per month maximum, compared to other browser vendors that issue security updates on a more rapid, as-needed basis. For example, Mozilla has been known to push out security updates only a few days apart for its Firefox browser. Corporate IT staff may face a dilemma when deploying an alternative web browser, since some internally-developed web applications may require ActiveX or may "work best when using Internet Explorer". These issues will require some work to fix.
- Disable automatic handling of web content. Some web browsers, like Firefox, allow the user to configure how each type of web content is handled. Consider content such as PDF files or QuickTime movies. Most web browsers (Firefox included) will handle such content automatically by finding and running a "helper application" such as Adobe Reader or a movie player (or their equivalent plug-ins). This allows a drive-by download to succeed automatically in the background. But if we configure a browser such as Firefox to "always ask" before finding and running a helper application for each type of content, drive-by downloads won't succeed unless we approve. The trick is to react appropriately when the browser notifies us that some content needs to be handled. Basically, we just need to say "no" when we're surprised, and "yes" when we expected it. Here are two trivial examples. Suppose you surf to a normal web page and begin reading it, but suddenly (even though you didn't click on anything) Firefox asks you how you want to handle a PDF file. You would click "Cancel" because you didn't expect or want a PDF file. Very suspicious! Or, suppose you click a link to download a PDF file and Firefox asks you how you want to handle an MP4 video file. You would click "Cancel" because you expected a PDF file, not a video file. Very suspicious!
- Use alternative operating systems. The same points made previously about web browser popularity apply to operating systems too. Criminal software developers don't have time to target unusual desktop operating systems. Instead they attack popular operating systems based on market share. You can see what desktop operating systems are popular here. At the end of 2010, Windows XP was still the most widely deployed desktop operating system in the world and was therefore the most widely attacked operating system in the world. But Windows 7 is gaining on XP, and the popularity of Apple's iPhone and iPad products may increase Mac OS X's popularity, and Google's Android operating system is increasingly popular for mobile devices, so we can assume that those, too, will become more of a target in the next year or two. But for now, everyone—home users and corporate IT staff alike—should consider using Windows 7, Mac OS X, or GNU/Linux on their desktops instead of XP. Regarding Linux, it is doubtful that the market share of the desktop version of Linux will ever exceed a few percent, hence it is doubtful that desktop Linux (and all of its application software and libraries) will be targeted by criminals anytime soon. Many home users and employees of fairly small companies use their computer primarily for checking e-mail, writing documents, creating spreadsheets, and surfing the web, so these users are good candidates for using Linux instead of Windows.
- Use alternative PDF viewers. A computer I examined had been hit with a malware infection disguised as an "anti-virus security warning" that took control of its owner's screen, keyboard, mouse, and power button. This was designed to convince her to pay $69.95 to purchase "anti-virus" software to remove "security threats." She fought with her computer for days and almost paid the ransom to get her computer back to normal. My forensic analysis found that a malicious PDF had been passed straight through Firefox to Adobe Reader. If she had installed the newest version of Adobe Reader she may have been protected. Or maybe not, since a different version of the PDF file may have been delivered instead, with the same end result. New vulnerabilities in Adobe Reader keep being found. To mitigate this risk we need to use an alternative PDF viewer, such as Foxit Reader. It's important to uninstall all previous versions of Adobe Reader, since these can still be exploited if still present.
- Use other alternative application software. "Everyone" uses Microsoft Office, WinZip, QuickTime, RealPlayer, and other popular software. Therefore, criminals will exploit vulnerabilities in these applications. We can mitigate these risks by using OpenOffice, 7-Zip, VLC, and other less-widely-used applications instead. It's important to uninstall the popular versions we're no longer using, since these can still be exploited if still present. It's also important to check the new versions' popularity once a year, in case the app that no one was using a year ago has skyrocketed! Case in point: Firefox.
[As a side note, the use of less-popular software to increase security has sometimes been dismissed by incorrectly equating it with the false principle of "security through obscurity". Publicly available software like OpenOffice, 7-Zip and GNU/Linux are certainly not "obscure", for hackers can download these software products as easily as anyone else. The Opera web browser's User Agent is just as visible to web sites as Internet Explorer's, so hackers could target Opera with drive-by downloads as easily as they target Internet Explorer.]
- Live in a limited Windows user account. This is perhaps the single most important thing we can do to mitigate risks. But as mentioned, creating a limited user account is easiest when configuring a brand-new computer. Otherwise we may have to spend a whole day figuring out how to move all our stuff from the admin account to a new limited user account.
- Make sure "AutoRun" and "AutoPlay" are disabled. Some details were mentioned previously. It's incredibly important to disable these (despite Microsoft's own contradictions and confusion factors in their documentation). Doing so will close an important security hole in Windows' handling of removable devices such as flash drives, which malware currently targets.
- Don't share flash drives. Make a habit of not sharing flash drives with friends and coworkers, and don't bring your flash drives to and from your workplace. This will reduce the spread of malware and protect you more than if you shared freely. In practice, we will share flash drives, but we can certainly limit our exposure.
- Use multiple anti-virus scanners. This was discussed previously. The devil is in the details (such as my points about preventing anti-virus scanner conflicts).
- Uninstall Java or disable it in all browsers. Most of us don't use Java, yet we still have it on our computers. This places us at risk of drive-by downloads as mentioned previously. We can mitigate that by simply removing Java or disabling it in all of our browsers. But mitigation becomes tedious if we depend on Java and cannot remove it. In that case our first priority should be to install the latest version and remove all older versions. If you absolutely need an older version, good luck! And if you depend on Java for various web sites, good luck!
- Disable or remove unused browser plug-ins. An unused but vulnerable plug-in is just as useful to criminals as a constantly-used vulnerable plug-in. To mitigate this risk we need to simply disable or uninstall all plug-ins we never use.
- Keep operating systems up to date. For Windows, it's important to ensure that automatic updates are enabled and that such updates are automatically installed too (not just downloaded). Microsoft publishes security updates on a regular schedule: it's called "Patch Tuesday", and it's the second Tuesday of each month. A few "out-of-band" patches have been issued at random times between Patch Tuesdays, but these have been rare. Newly released updates may not be actually available until the afternoon in the U.S. There is no reason to wait for automatic updates to install patches; we can update our computer manually any time we wish. Corporate IT staff may wish to learn about patches ahead of time and may even wish to test them before widely deploying them across the enterprise. Microsoft publishes advance warning of upcoming updates in special Security Bulletins.
- Uninstall unused application software. We need to know that a "drive-by download" can infect our computer by exploiting a vulnerability in application software that we don't even use. Home users often "collect" software on their computers, never to use it again. Corporate IT staff have more control over what's on enterprise computers, but should still consider removing unneeded or unwanted software such as QuickTime from business computers anyway.
- Keep application software and browser plug-ins up to date. We can greatly reduce risks due to popular, vulnerable software applications and plug-ins by upgrading to newer versions that don't contain so many well-known security holes. This is easier said than done. Three problems exist. First, most application software and plug-ins have no update process at all. Examples include the vast majority of downloadable "gee whiz" plug-ins and consumer applications sold in online or brick-and-mortar stores, ranging from crossword puzzle games to photo editors. Browser plug-ins that came pre-installed on our computers are the hardest to update, since we cannot easily tell where they came from or which application software they belong to. To update such plug-ins and applications we literally have to buy or download a new version (if we can find it). Second, even if all of our software has an update mechanism, we may have many dozens of applications installed. That means updating each one separately, which is tedious and error-prone. No one offers a "one-click" method to update ALL of these at once. Third, each plug-in and application has its own potentially weird or difficult upgrade method (think of Flash Player during most of its lifetime).
- Read the Infosec news once a week. The threat landscape keeps changing, so it's important to keep up. For example, I had never heard of malware targeting Macs until Sophos published its annual Security Threat Report for 2007, which stated that financially motivated criminal hackers had created a few pieces of serious malware to target Macs for the first time. Similarly, Adobe Reader was never a target before 2007, but by 2009 a huge number of malicious PDFs targeting Reader were circulating in the wild. One way to keep up with events is to read a summary of information security news each week. If you can't easily locate such news, try The Register, The H, or Security News Portal. You may have to click one or two levels deep into those sites, but the information security news you need should be fairly easy to find.
- Revise your Malware Action Plan every year. The information security news items we read each week should strengthen our Malware Action Plans over the long term. It's always a good idea for us to write down our Malware Action Plan. Then, we can't forget what we decided, and we can adapt it over time.
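As a footnote to the update-related items above: readers comfortable with scripting can check which Microsoft security patches are already installed on a Windows machine. Below is a minimal Python sketch of the idea; the sample text is made up for illustration, and on a real machine a `subprocess` call to Windows' built-in `wmic qfe list` command (or PowerShell's `Get-HotFix`) would supply the output being parsed:

```python
import re

def parse_hotfix_ids(wmic_output):
    """Extract KB hotfix identifiers (e.g. 'KB2871997') from
    'wmic qfe list' style output, sorted and de-duplicated."""
    return sorted(set(re.findall(r"KB\d+", wmic_output)))

# Sample output for illustration only; on Windows, something like
# subprocess.run(["wmic", "qfe", "list"], capture_output=True, text=True)
# would supply the real text.
sample = """Caption                       Description      HotFixID   InstalledOn
http://support.microsoft.com  Security Update  KB2871997  5/12/2014
http://support.microsoft.com  Security Update  KB2862152  11/12/2013
"""
print(parse_hotfix_ids(sample))  # → ['KB2862152', 'KB2871997']
```

The resulting list can then be compared against the KB numbers in Microsoft's latest Security Bulletins to spot machines that have fallen behind on Patch Tuesday.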
The news story that prompted me to write this blog entry is sad but increasingly typical, since more and more people are putting more and more of their personal details online. And they're doing this seemingly without regard for the eventual consequences, probably because few people ever consider that someone—whether a stranger or not—would use such details against them in a big way. In this case, women's lives were devastated by a stranger who used nothing more than their published personal details plus their "secret questions" against them.
As everyone probably knows by now, in online accounts a "secret question" is a question about us that "only" we know the answer to, such as "What was your first pet's name?" The purpose of secret questions is to allow online account passwords to be reset if forgotten.
Back in 2005, the well-known security expert Bruce Schneier spoke against the use of "secret questions" in online accounts because they seriously weaken everyone's security. Why do "secret questions" weaken security? Because:
- "Secret questions" completely bypass the account's password, rendering the password useless.
- Answers to "secret questions" remain the same even if the password is changed.
- To a computer, there is no real difference between a complete stranger who doesn't know someone's password, and the actual account holder who has forgotten his password. They're both just people who don't know the password!
- Most answers to "secret questions" are actually known by a lot of people other than the account holder (e.g., her family, relatives, friends, neighbors, coworkers...).
- Many answers to "secret questions" can be discovered by complete strangers who live far away, by simply reading the account holder's public online "footprint" (e.g., her Facebook page, blog entries, forum posts, etc.).
Schneier's statements prompted a lot of discussion about this topic, but it's almost as if someone else (let's call him "The Anti-Schneier") gave the exact opposite advice, and all of the online services believed him instead!
The fact is, more and more online accounts have adopted "Secret Questions" since then. More and more accounts demand that we use "Secret Questions". And more and more accounts are being compromised as a result.
Everyone probably recalls how David Kernell used publicly available information to hack into the personal Yahoo! e-mail account of vice presidential candidate Sarah Palin in 2008. But other account hacks have had far worse consequences.
One of the worst cases I've heard about is where a California man, age 23, broke into e-mail accounts of 3,200 women who were complete strangers to him. He also put explicit photos of around 175 of these women on their own Facebook pages, and also e-mailed such photos to these women's friends.
The offender was no hacker. He was able to break into these women's e-mail accounts by simply answering their account's "secret questions". How did he know these answers? He merely searched Facebook and other online services and scraped together information on his victims until he could answer many of the common "secret questions" used by e-mail providers such as Gmail, Yahoo! Mail, Hotmail, and others.
Once he had gained access to his victims' e-mail accounts he was able to gain access to their Facebook accounts, presumably by having Facebook reset his victims' passwords and send a confirmation e-mail to their e-mail accounts (which he had access to!).
Let's check the facts of this case and extract some action items that we can use to reduce our risks:
- Fact: "Secret questions" defeat online account security. To mitigate this threat we need to make the answers to our secret questions as strong as our normal password. We can do this by using non-factual answers that, truly, no one else would know or guess. For example, if the "secret question" is "In what city were you born?" we could answer "Detroit" when we were actually born in San Francisco. Or we could simply answer "v8e84jrt%". This technique requires us to record our answers somewhere so we can use them later if needed.
- Fact: It doesn't take a hacker to break into an online account. A hacker would probably focus on using a SQL injection attack on the login page to gain access to the web site's back-end database. But an evil coworker would simply use what he knows about his intended victim—such as where she was married—to gain access to her accounts. In information security terminology, the evil coworker is an "insider" threat. One way to reduce insider threats is to provide people with information on a "need to know" basis only. In social terms this means limiting the information we give to other people. Today, Facebook has made it possible for us to give a whole lot of people unlimited information about us. This doesn't mean that we have to! Therefore, one way to reduce risks from information we've published on Facebook is to comb through our profile information and posts, and delete personal information that no one probably cares about anyway.
- Fact: Some banks use "secret questions". It's bad enough for someone to break into our e-mail account, but it's much worse if they break into our bank account instead. Yet some banks use the same kinds of "secret questions" as e-mail providers do. But banks sometimes also use a different kind of "secret question" to verify people's identity during customer service calls. Such questions and answers are completely open-ended, which means customers can make up any question and answer they want (they are not limited to a few canned questions). This can be more secure because it's harder to guess the answer to an unknown question than to a typical canned question. To reduce the risk of someone impersonating us, we should never choose simple, common questions such as "What was your first car?". Instead, choose offbeat or wacky questions, such as "What are the first 10 digits of pi?" or "Whose bank account did you hack into last year?"
- Fact: Some people send explicit self-portraits from their e-mail accounts. I mention this only to warn readers not to do this, as this could be devastating in the future. More than three thousand women can attest to this.
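To put the first fact's mitigation into practice, here is a minimal Python sketch of generating a random, non-factual "secret answer". The character set and default length are my own choices, not anything the e-mail providers require, and the result still needs to be recorded somewhere safe:

```python
import secrets
import string

def random_secret_answer(length=12):
    """Generate a random answer that no one could guess from our public
    footprint (e.g. for 'In what city were you born?'). Uses the
    cryptographically secure 'secrets' module rather than 'random'."""
    alphabet = string.ascii_letters + string.digits + "%$#@"
    return "".join(secrets.choice(alphabet) for _ in range(length))

answer = random_secret_answer()
print(answer)       # e.g. something like 'v8Qk2ejrT%a4' -- record it safely
print(len(answer))  # 12
```

A gibberish answer like this is exactly as hard to guess as a strong password, which is the whole point: the "secret question" no longer bypasses our password strength.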
I've spent the last six years reading many startling information security news stories. Most of these news stories caused my eyes to widen; then a light bulb went on above my head as I connected the dots; and then I tried to take action to reduce my business risks (and my family's risks too). I often forwarded these same news stories to my consulting clients, professional peers, relatives, and friends, hoping they would be able to achieve a similar understanding and derive similar plans of action.
But I finally realized that merely forwarding a news story didn't actually inform anyone of what the story really meant. And it didn't actually inform anyone of how to react to the news story in a useful way.
That's where The Brent Report comes in. Every entry in The Brent Report security blog will dissect the facts of a recent security news story, and extract some action items that can be used to reduce computer security risks not only in the workplace, but at home too.
Feel free to send me your feedback on these blog entries via e-mail. I'll append your comments and information to my blog entries and give you credit if you give me permission to include your name.
Addendum: You may be wondering why The Brent Report blog doesn't allow readers to post their comments directly, like many other blogs do. You may be thinking, "Doesn't this guy know about Web 2.0?" The answer is that I follow my own security advice! I'm well aware of the constant stream of security vulnerabilities found in WordPress and a lot of other blogging software, which allow amateur "script kiddies" as well as serious hackers to hijack blogs for their own purposes ("lulz", blackhat SEO, comment spam or trackback spam, etc.). Why, even PBS.org was hacked and defaced in 2011 due to a vulnerability in Movable Type blogging software. I'd have to set up a CAPTCHA engine and then watch my blog like a hawk to prevent someone from tainting my blog with extraneous content. No, thanks; I've got better things to do. Just e-mail me, please!