designsbylareau.com
Multi-discipline Technology Consultants


Our Mission: To reduce your personal and business risks by deriving action items from recent news stories.

Note: Brent LaReau is your point of contact for this blog.


018

A Web Site Can Reset Your Samsung Phone to Factory Defaults

Brent LaReau, designsbylareau.com
Posted: Oct 12, 2012

The headline shown above seems too extreme to be true, doesn't it? But in fact, if you were reading my blog on a certain type of Android phone made by Samsung, I could have instantly "wiped" your phone RIGHT NOW simply by embedding a specific "USSD" code here. You wouldn't even have had time to read these sentences. Instead, your phone would have begun to reset itself as soon as it loaded this web page. When it was finished, you would have found:

Information Security Cartoon
(You can see all of my cartoons here.)

Fortunately, not all Samsung Android phones are vulnerable to this attack. Full details are not yet known, but at least we know the following Samsung phones are vulnerable:

Read More...

How did this enormous vulnerability creep into Samsung phones? There is a short answer and a longer answer. The short answer is that Samsung's software development teams created a "dialer" app for the company's Android phones that instantly executes any USSD code without asking the user to confirm the action. One of those USSD codes wipes the phone (resets it to its factory-original condition). And if any such code is embedded in a web page, it is executed the moment the page loads.
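To make that short answer concrete, here's a deliberately defanged sketch in Python: a tiny web server that serves a page embedding a USSD code in a tel: URI inside an iframe, the same kind of delivery trick that was demonstrated publicly. I've used *#06#, which merely displays the phone's IMEI; the reported attack embedded the factory-reset code instead.

    # Defanged sketch of the delivery trick: a page embedding a USSD code
    # in a tel: URI. A vulnerable Samsung dialer executed such codes as
    # soon as the page loaded. "*%2306%23" is the URL-encoded form of
    # *#06#, which merely displays the IMEI.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = (b"<html><body>"
            b'<iframe src="tel:*%2306%23"></iframe>'
            b"</body></html>")

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()

Notice how little effort is involved: no exploit code, no privilege escalation, just a URI that the dialer app trusted blindly.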

That was the short answer. Now for a longer answer that some people won't like: agile software development methodologies permit companies like Samsung to make 10,000 software modifications (new features and bug fixes) each year in a frantic race against their competitors. We have to admit this rapid, incessant march forward is quite an accomplishment, but how can an agile design team consider the consequences of each software modification when its main goal is to do more and do it faster?

How can an agile culture that revolves around optimization of business processes (not security processes) avoid oversights and mistakes that place end users at risk—to the delight of malicious hackers, teenagers, and blog writers like me?

How can the "user stories" embraced by agile methodologies include security considerations when the "user" is just an average consumer instead of a security expert?

And how can the "unit tests" embraced by agile software developers even begin to address system security issues at all? Especially since many lean, agile teams act as if unit tests can replace integration testing and system testing!

[Update, February 2013: Samsung isn't the only vendor that fails to assess security issues when developing its products and software. The Federal Trade Commission (FTC) has announced a settlement with HTC over complaints about lack of security in its mobile phone software. The FTC stated that HTC made little or no effort to address user security when it customized Android and Windows Phone software for its smartphones. According to the complaint, HTC's software engineering was sloppy; HTC didn't train its design teams in secure software development practices; HTC didn't perform any penetration testing on its mobile devices; and HTC's staff used software development methods that are well known to be poor security practice.]

Let's peruse the facts of this case and generate some action items that we can use to reduce our risks.

You can read an original news article about this topic here. You can contact me here.



017

Malware Can Now Infect Virtual Machines That Aren't Running

Brent LaReau, designsbylareau.com
Posted: Sept 19, 2012

I have lots of virtual machine (VM) files on various computers and external hard drives. VMs are incredibly useful for developing and testing software; for evaluating different operating systems (even Android!); and for learning how to install and configure applications before installing them "for real".

But ever since I started using VMs I've been concerned about malware infections. As far as malware is concerned, attacking a running VM is no different from attacking a running physical computer, so a running VM is exposed to the same infection vectors as a physical computer.

But now we have another infection vector to worry about: a new strain of malware called "Crisis" has been identified that can find and infect virtual machines that are not running.
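To see why dormant VMs are suddenly in play, consider just the discovery step. The sketch below is hypothetical (it is not Crisis's actual code, and the file extensions and starting directory are my assumptions), but it shows how trivially malware can enumerate VM images sitting on disk as ordinary files:

    # Hypothetical sketch of the discovery step only; NOT Crisis's actual
    # code. Dormant VM images are just ordinary files on disk, so
    # enumerating them is easy.
    import os

    VM_EXTENSIONS = (".vmdk", ".vmx", ".vdi", ".vhd")

    def find_vm_images(root):
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.lower().endswith(VM_EXTENSIONS):
                    yield os.path.join(dirpath, name)

    if __name__ == "__main__":
        for path in find_vm_images(os.path.expanduser("~")):
            print(path)

Once an image file is found, malware that understands the disk format can mount it and modify its contents, with no need for the VM to be running and no chance for the guest's antivirus to object.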

Information Security Cartoon
(You can see all of my cartoons here.)

Read More...

Let's check the facts of this case and derive some action items that we can use to reduce our risks.

You can read an original news article about this topic here. You can contact me here.



016

Mom Changed Her Kids' Grades in School's Computer and Accessed School Employees' E-mails and Personnel Files

Brent LaReau, designsbylareau.com
Posted: August 1, 2012

When I read the news story that prompted me to write this blog entry, I thought: OK, how many times do we need to read about something like this before we finally realize that apparently anyone, even a fairly average 45-year-old mom, can and will gain unauthorized access to someone's computer system? We should ask ourselves this question: "What makes me think that one of my employees, or their moms or sons or cousins, won't gain unauthorized access to my computer systems even once, let alone 110 times like this mom did?"

I used to think people wouldn't break into computers because it's unlawful and just plain wrong. But that didn't stop this mom, who later conceded her actions were unethical but said she didn't think they were illegal.

And, if we ignore the human factor for a moment, do we really think that our small business computers are somehow magically more secure than those owned by Northwestern Lehigh School District (which is where "mom" worked)?

Information Security Cartoon
(You can see all of my cartoons here.)

Let's dissect the facts of this case and extract some action items that we can use to reduce our risks:

Read More...

You can read an original news article about this topic here. You can contact me here.



015

Company Almost Fell Prey to Industrial Espionage via Flash Drives

Brent LaReau, designsbylareau.com
Posted: July 19, 2012

After reading a recent news story about flash drives and industrial espionage, I wondered: what would my consulting clients' employees do if they found a flash drive in their company's parking lot?

Let's face it: most employees would plug it into a computer to see what's on it. And then a naive little convenience feature in Windows would either automatically run any programs it discovered on the flash drive, or kindly ask whether the employee wished to allow such programs to run. Of course most people would instantly click "Yes" or "OK" without even reading the warning. The end result is that any malicious software (malware) residing on that flash drive would shout "Yippee!", install itself on the computer, and happily begin its dirty work of uploading passwords and Excel spreadsheets to computers located in China.
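One concrete countermeasure is to turn AutoRun off entirely. Windows honors a well-known registry policy value, NoDriveTypeAutoRun; setting it to 0xFF disables AutoRun for every drive type. Here's a minimal Python sketch (Windows only, run as administrator):

    # Disable Windows AutoRun for all drive types via the well-known
    # NoDriveTypeAutoRun policy value (0xFF = all drive types).
    # Windows only; requires administrator privileges.
    import winreg

    key = winreg.CreateKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer",
    )
    winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
    winreg.CloseKey(key)

A domain administrator would normally push the same setting through Group Policy rather than scripting it per-machine.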

Information Security Cartoon
(You can see all of my cartoons here.)

Does that sound like a ridiculous scenario? It's exactly what almost happened to DSM, a major chemical company in the Netherlands. One of its employees did find a flash drive tainted with password-stealing keylogger malware in the parking lot, but he was smart enough to turn it in to DSM's IT department immediately. In turn, the IT staff members were smart enough to analyze the flash drive's contents instead of just plugging it in like an average person would. The spyware they found was designed to steal usernames and passwords and then upload them to a remote server on the Internet. They quickly blocked the remote server's domain or IP address on their network to prevent data leakage.
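Blocking a malicious server doesn't have to be fancy, either. As a minimal illustration (the domain below is a placeholder, not the actual server from the DSM incident), here's a Python sketch that sinkholes a known-bad domain via the local hosts file:

    # Minimal illustration: sinkhole a known-malicious domain by pointing
    # it at 0.0.0.0 in the hosts file. The domain is a placeholder, not
    # the actual server from the DSM incident. Requires admin/root rights.
    import platform

    BAD_DOMAIN = "malicious.example.com"
    HOSTS = (r"C:\Windows\System32\drivers\etc\hosts"
             if platform.system() == "Windows" else "/etc/hosts")

    with open(HOSTS, "a") as f:
        f.write("\n0.0.0.0  " + BAD_DOMAIN + "  # sinkholed\n")

Real networks would do this at the firewall or DNS server, as DSM's IT staff did, so the block covers every machine at once.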

Using infected flash drives to smuggle malware into companies has become a regular occurrence in recent years, according to security researchers.

Let's analyze the facts of this news item and formulate some action items that we can use to reduce our risks:

Read More...

You can read an original news article about this topic here. You can contact me here.



014

SWAT Team Raids Wrong Home Due to Unprotected Wi-Fi Network

Brent LaReau, designsbylareau.com
Posted: July 9, 2012

The headline above is from an eye-opening news story I read recently. You can re-create this disaster in your own home or company. The ingredients are simple. First, go to Walmart, Best Buy, or CDW and purchase an inexpensive wireless access point or wireless router. Second, leave all of its settings in their default state and simply connect it to your DSL or cable modem (or your company's Internet connection). Finally, take any laptop computer, netbook, or smartphone and connect it to the Internet via your new wireless network. No password is required because you haven't enabled encryption!

The only problem is that your immediate neighbors (whether in an industrial park or at home), and even passers-by, can connect to your new open wireless network too. And then use your Internet connection for free. For whatever harmless or harmful purposes they want. And if authorities eventually track the harm to its source, they will naturally find that it's you. And if the harm is great enough, they will send in a SWAT team who will toss a couple of "flashbangs" (stun grenades) into your home or company to get your attention, and then interrogate or arrest your family or your employees.

OK, so open Wi-Fi is bad news. But those of us who HAVE protected our Wi-Fi networks at home and at work actually face the SAME risks as those who have open Wi-Fi networks. This is not obvious at all, so it will be explained in detail below.

Let's examine the facts of this story and derive some action items that we can use to reduce our risks:

Read More...

You can read an original news article about this topic here. You can contact me here.



013

IBM Outlaws iPhone's Voice-activated Digital Assistant ("Siri") as It Leaks Data to a Third Party

Brent LaReau, designsbylareau.com
Posted: June 4, 2012

After reading a news story about IBM and Siri I began to dig into the details. And the deeper one digs into this Siri phenomenon, the more interesting it gets!

According to Apple and Wikipedia, Siri is an iPhone 4S application integrated into the iOS operating system that "lets you use your voice to send messages, make calls, set reminders, and more. Just speak naturally. Siri understands what you say." Sounds very convenient. So, why is IBM preventing employees from using it? Will other companies follow suit?

The answer lies in the details of how Siri works. We must begin by understanding that a little iOS software application running on a little 800MHz dual-core CPU—which is significantly less powerful than a laptop computer's CPU—cannot decode speech. It takes more horsepower than that.

Next, we need to understand that Siri "knows" what we are talking about only by establishing a personalized context within which to interpret words. For example, "mike" can be either a person's name or an abbreviation for "microphone". If we routinely call someone named "Mike", then Siri should know that "mike" is a person with a phone number. On the other hand, if we are a singer or musician, then Siri should know that "mike" means "microphone". A little app on a little CPU cannot know these things.

As you may have guessed by now, Apple's Siri software merely transmits our speech, plus a lot of other information stored in our iPhone to establish context, to the cloud. Specifically, Siri transmits our context data, plus raw audio that has been compressed using the Speex audio codec, via the HTTPS protocol over 3G or WiFi to Apple's large data center in Maiden, North Carolina. There, powerful CPUs, large application software, and extensive databases can be harnessed to decode our speech in near real-time so that a prompt, accurate, and appropriate response can be sent back to our iPhone.
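To picture what leaves the phone, here's a purely illustrative Python sketch of the general pattern. The URL, headers, and payload fields below are my inventions, not Apple's actual protocol; the point is simply that compressed speech plus personal context data travels to someone else's server:

    # Purely illustrative: the generic "ship audio + context to the cloud"
    # pattern. The endpoint and headers are invented placeholders; this is
    # NOT Apple's actual Siri protocol.
    import json
    import urllib.request

    audio = open("utterance.spx", "rb").read()   # Speex-compressed speech (assumed file)
    context = {"contacts": ["Mike"], "gps": [35.58, -81.21]}  # personal context data

    req = urllib.request.Request(
        "https://speech.example.com/recognize",  # placeholder server
        data=audio,
        headers={"Content-Type": "audio/x-speex",
                 "X-User-Context": json.dumps(context)},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())

Every request of this shape deposits another slice of your personal context on the far end, which is exactly what the next paragraph is about.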

Therefore, Apple's personalized context database in North Carolina stores everything our iPhone knows about us, including our address book contents, our GPS coordinates, the names of songs we listen to...

Read More...

I used to think that Google was incredibly intrusive, as they pretty much keep track of everything we do on the Internet each minute of the day. This is known to anyone who has placed a packet sniffer on their computer while they surf, or who has reverse-engineered the JavaScript code that Google runs on our computers for almost every web site we visit.
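You don't need commercial tools to verify this. Here's a small sketch using the scapy packet-manipulation library (assuming you have it installed; it needs root or administrator privileges) that prints every DNS lookup your machine makes while you browse:

    # Print every DNS query the machine makes; watching this while
    # browsing shows how many third-party tracking domains get contacted.
    # Requires the scapy library and root/administrator privileges.
    from scapy.all import sniff, DNSQR

    def show_query(pkt):
        if pkt.haslayer(DNSQR):
            print(pkt[DNSQR].qname.decode())

    sniff(filter="udp port 53", prn=show_query, store=False)

Browse a few ordinary news sites while it runs and count how many of the queried domains belong to advertising and analytics companies rather than the sites you actually visited.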

But now it sounds like Apple is as bad as Google, and we can understand why IBM uses Apple's Mobile Device Management (MDM) framework built into iOS to disable Siri on its employees' iPhones. IBM sees no good reason for a third party to have intimate details about its employees, family members, business partners, new projects, e-mail accounts, physical locations, and so on.

And consider this: could hackers obtain a copy of these intimate details by sniffing CDMA, GSM, or Wi-Fi packets, or by hacking into Apple's cloud? Hackers have surely figured out how to drain information from many large data sources; why not Apple's data center too?

Let's look at some facts for this topic and define some action items that we can use to reduce our risks.

You can read an original news article about this topic here. You can contact me here.



012

Does Your Industrial Control System Have a Back Door?

Brent LaReau, designsbylareau.com
Posted: May 10, 2012

Back in February I had blogged about how 10,000+ industrial control systems were found to be connected to the Internet, even though this violates both "best practices" and vendor recommendations. Worse, only 17% of those systems required a password.

It didn't seem that things could get any worse than that. But now, according to a more recent news story, a researcher has discovered that one brand of industrial network switches and servers that are commonly used in control systems contains a "back door".

Industrial control systems are used everywhere in industrial sectors and critical infrastructures. They literally run the whole planet. Everything would screech to a halt if we simply unplugged all of them at once. Factories would cease production, sewage treatment plants would back up, and we couldn't even buy a can of Coke.

It's a big deal when they don't work correctly. Aside from ordinary software bugs that can cripple industrial control systems, we're increasingly worried about hacktivism and other terrorist activities causing these control systems to fail in big ways. That's why the U.S. Department of Homeland Security created the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT). Its stated purpose is to conduct vulnerability and malware analysis, provide on-site support for incident response and forensic analysis, provide situational awareness (intelligence), coordinate responsible disclosure, and share information and threat analysis through published alerts. You can learn more about ICS-CERT here.

OK, so everyone is concerned about hackers breaking into industrial control systems. So why did RuggedCom put a backdoor in its Rugged Operating System, which is used in its industrial network switches and servers? A backdoor is a hidden means to gain remote access to a system or its software. Backdoors bypass normal authentication methods. And remote access through backdoors usually isn't logged anywhere, so access is entirely under the radar. Most users have no way to know if any backdoors exist in their systems or software.
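In code, the backdoor pattern is depressingly simple. The sketch below is hypothetical (it is not RuggedCom's actual implementation, and the password derivation is invented), but it shows why a backdoor bypasses both authentication and auditing:

    # Hypothetical sketch of the backdoor pattern; NOT RuggedCom's code.
    import hashlib

    def vendor_password(mac_address):
        # Invented derivation. The point: it's computable by anyone who
        # can observe the device's MAC address.
        return hashlib.md5(mac_address.encode()).hexdigest()[:8]

    def authenticate(user, password, user_db, mac):
        # Hidden vendor account: skips the real user database entirely
        # and leaves no trace in the audit log.
        if user == "factory" and password == vendor_password(mac):
            return True
        return user_db.get(user) == hashlib.sha256(password.encode()).hexdigest()

Because the hidden branch never touches the user database or the logging path, no amount of password policy or account review on the customer's side will reveal it.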

You may recall that the 1983 movie War Games was based on the premise that a teenage hacker found a backdoor in a secret government computer system. Real-life hackers like to find and exploit backdoors in computers, systems, and software, too.

Let's break down the facts of this situation and develop some action items that we can use to reduce our risks:

Read More...

You can read an original news article about this topic here. You can contact me here.



011

91% of Small Healthcare Practices in North America Suffered a Data Breach in 2011

Brent LaReau, designsbylareau.com
Posted: April 16, 2012

While reading a news story about healthcare practices being hacked into, I remembered how my doctor and I had been in a rut for about three years. Once a year he'd say pretty much the same thing: "I see that you haven't signed up for online access to your medical records. Would you like to sign up? It's quick and easy. Then we can activate your online account and give you a temporary password. You can see all of your test results immediately instead of having to wait for us to mail them to you. And you can send me a message any time you wish."

And then I would count to 10 and give pretty much the same reply each time: "No, thanks. I don't want my personal medical records to be accessible on the Internet. I'm a consultant and one of my specialties is information security. I read about data breaches all the time and I'm familiar with how hackers gain access to computer systems."

And then he would pretty much offer the same rebuttal (sometimes with a slight frown): "I'm on the board that oversees our computer security. We've never had a hacker break in. Our web site uses a secure connection and it's password-protected. Most people use it and no one has reported a security problem."

To which I would always reply (after counting to ten again): "No, thanks. Just mail the information to me."

The interesting part is that even though we disagreed, he and I both had factual, self-consistent viewpoints that gave us confidence:

Read More...

My doctor hasn't mentioned my lack of an online account recently. If he mentions it again, I'll send him a copy of the news article that prompted this blog entry ("Most Small Healthcare Practices Hacked In The Past 12 Months"). The article states that 91% of small healthcare practices surveyed in a North American study claim to have had a data breach during 2011. This was based on a survey of 700+ organizations with 250 or fewer employees. Examples of "small healthcare practices" are physicians' offices, dentists' offices, home healthcare services, health clinics, and nursing care facilities.

Information Security Cartoon
(You can see all of my cartoons here.)

Dare we compare our own organizations of 250 or fewer employees to the small healthcare organizations mentioned in the news article? Can we learn from their experiences to reduce our own risks of a data breach?

Let's study the facts of the news article and draft some action items that we can use to reduce our own organization's risks.

You can read an original news article about this topic here. You can contact me here.


