Thursday, December 8, 2011

Card skimmers at a supermarket chain – Inside job?

From the Mercury News (via ars technica):

“Further evaluation uncovered an extra computer board that had been placed inside the checkout machine, recording customers' financial information.”

So…how did a skimmer get inside the checkout machine?

I’ve never been in a Lucky Supermarket, but the places that I frequent that have self check-out lanes all have employees watching the lanes and presumably have security camera coverage.

These aren’t unattended ATMs or gas pumps monitored by a harried convenience store cashier who has five other customers waiting to check out, so one would think that adding electronics to the inside of the checkout machine would be a detectable event.

The company CFO doesn’t think it’s an inside job though:

“Although the skimmers at Lucky's stores were apparently installed inside the checkout units, Ackerman said the company doesn't suspect an inside job”

Interesting.

Friday, November 11, 2011

How not to disburse financial aid.

Decades ago I worked for a small college with a strong, forward-looking president who firmly believed a significant fraction of our interactions with students should be computerized. He believed that if we automated background bureaucracy we could better handle budget cuts and shift more resources into classrooms. He also believed that if students had a clear, consistent interface into our bureaucracy they’d be better, happier learners.

My job was to make that happen. I networked all staff and student computers, computerized all student grading, registration, transcripts, fees; all college accounting, purchasing, invoicing, inventory, room scheduling and whatever else the president thought was bureaucratic. My toolkit was an ARCnet network, a Novell Netware 2.0a server, a few dozen 80286 computers and a database called Paradox. I turned off the IBM System/36. There was no WWW.

It took a couple years, but we got to the point where a student could walk up to a kiosk (a netbooted computer running a Paradox client app) and see their up-to-the-minute quiz & test scores, how much they owed, what the next assignment was in each course, what assignments were required for course completion, and what courses were required for degree/program completion. A push of a button got them an unofficial transcript on a nearby dot matrix printer.

Instructors entered each test/quiz/assignment into the system. Course grades, program and degree requirements were updated and maintained automatically. Students could look up their grades any time they wanted, with real time refreshes on recently entered grades.

Department heads entered their own purchase orders. Accounting approved them electronically and faxed them out the door. Department heads could view up-to-the-minute account balances any time, and our president had a kiosk in his office that showed real-time balances on all critical accounts (an account turned red if it was debited since the last refresh).

We were really close to having our electronic testing center automatically upload test/quiz scores to our records system in real time.

No batch jobs, either.

Students who were granted aid or loans often received more in grants/loans than their tuition & fees, so getting that money out to students early in the semester was a critical but very manual process. It was a huge deal for the students, as many of them needed that balance for food, rent, etc. Financial aid was already computerized using a stand-alone commercial package. Manually calculating and applying financial aid disbursements to student accounts was a major work effort, and consumed more than one person’s time for the first few weeks of the semester.

To get that automated, all I needed to do was interface that software with the student records/fees package I wrote, and a major, time-consuming paper-based process would be eliminated.

The president put that on my plate for fall start. I wrote a Paradox program that shelled out to DOS, ran the DOS-based financial aid application in a window, exported the necessary data from the financial aid application and imported it into the finance module. It applied the financial aid to the student’s account and figured out whether there was anything left over. If there was, it would print the student a check for the balance.
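
For what it’s worth, the core of that disbursement logic was trivial. Here’s a minimal sketch in modern Python, not the original Paradox code; the student record, the dollar figures and the check “printer” are hypothetical stand-ins:

# Illustrative sketch of the disbursement logic described above, not the original Paradox code.
# The student record, the amounts, and the check "printer" are hypothetical.
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    fees_owed: float      # outstanding balance on the student's account
    aid_awarded: float    # total grants/loans awarded this term

def disburse(student: Student) -> float:
    """Apply aid to the outstanding balance; return any refund owed to the student."""
    applied = min(student.aid_awarded, student.fees_owed)
    student.fees_owed -= applied
    return student.aid_awarded - applied

for s in [Student("A. Example", fees_owed=1200.00, aid_awarded=2000.00),
          Student("B. Example", fees_owed=1500.00, aid_awarded=900.00)]:
    refund = disburse(s)
    if refund > 0:
        print(f"print check for {s.name}: ${refund:.2f}")   # stand-in for the check printer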

We tested early & often, as neither the accountant nor the financial aid director was enthused about what to them was a cloudy process that they didn’t control, and I wasn’t enthused about mucking up a few hundred grand in financial aid disbursements. In the weeks before semester start, I’d run the program in simulation mode, the accountant and financial aid director would compare my simulation to their manual calculations, and we’d correct any differences.

Over and over and over….

The big day finally arrives. Ten days into the semester all bills are due and all aid checks get cut. The total disbursement is somewhere in the middle hundreds of thousands of dollars. Our financial aid director went line by line through each student’s fee statement one last time, compared it to each student’s financial aid disbursement, manually calculated the expected disbursement and compared each one to the simulation that I ran from my new program.

All was good – except that the accountant (fresh from college, new to the job) and I (nasty flu, 103°F fever) were more than a bit nervous at the prospect of disbursing a half million or so in checks using our own software.

On the morning of the big day – the day financial aid gets disbursed and registration payments are due – the hundreds of students who had an outstanding disbursement were lined up at the registrar’s window, not-so-patiently waiting for a check they could run down to the local bank and cash.

The financial aid director, the accountant and I carefully opened the safe, removed the signature plates and loaded them in the check signing machine. We removed the check stock from the safe and loaded it into the check printer. We (crossed our fingers and) pulled the trigger on the script that extracted each student’s financial aid data, compared it to their chart of accounts, credited any outstanding account balance and printed them a check for the rest.

The workflow was something like:

  • Accountant: Look at each check, compare it to last night’s simulation, run it through the check signing machine.
  • Financial Aid Director: Look at each check, compare it to last night’s simulation and her manual calculations, and mark an ‘X’ next to the student on the worksheet that she stayed up all night calculating.
  • Me: Doze off, the fever really dragging me down.

An hour or so later, a whole bunch of happy students have their fees paid and their financial aid balances paid out, check in hand. A large fraction of them drove straight to the issuing bank & stood in line to cash the check.

Then we get the call from the bank:

Bank President: “Umm are you guys aware that you are $200k overdrawn, and we have a line of a hundred or so students trying to cash checks on that account?”

Accountant: “Ah..Um..Ahh..”

The way our small town college managed its banking was to maintain accounts at both of the local banks and use SmallBankA for the seven figure main account and SmallBankB for the petty cash account in year one, then alternate back and forth each year. That way both of the locally owned banks got a shot at managing the large account, and we kept close relationships with the local economy. The main account probably had a $1m balance, and the small account probably had a $20k balance.

Guess which check stock we loaded into the printer?

Fortunately it was a small town, and the bank covered the few-hundred-grand overdraft for the day it took us to transfer funds. The bank president was smart enough to know that we were good for the funds, and graciously covered our mistake.

Saturday, October 1, 2011

Car goes over cliff, undetected

A car goes over the edge of a cliff on a narrow mountain road. The driver survives, but the accident goes undetected for six days. After five days, the family files a missing persons report. Law enforcement tells them that follow up will take days. The family doesn’t wait. They locate the car using their own means with the help of a detective and the phone company, including what appears to be one of the controversial warrantless cell phone locates that law enforcement does millions of times per year.

When the family located their father at the bottom of a ravine, they also found a second car had gone over the edge. That accident was unrelated and also undetected. That driver died.

Number of cars over the edge: Two. Number of cars detected and located by law enforcement/rescue workers: Zero.

No doubt that there will be a call for guard rails. I don’t think it’s practical to put rails on every spot on every road, but I do think that one could devise an inexpensive tell-tale. This could be as simple as a breakable ribbon or a row of hay bales on the outside of the curve. If the ribbon is broken or the hay bales are gone, something happened.

If preventative controls aren’t practical, is there a detective control in place?

Tuesday, September 20, 2011

The sky is falling. Sirens are blaring. Ignore it.

That’s what I do.

I’m located at the intersection of three large counties in an area of the country that has moderately frequent severe weather & tornadoes. All three counties fire up the sirens any time they think there is severe weather any place in the county. I hear the emergency severe weather sirens of all three counties, and I can’t tell which siren is from what county.

Each time I hear a siren (as often as once a week in summer) I can either:
  1. head immediately for shelter (basement) as officially recommended
  2. wake up a computer and research the current weather and radar
  3. ignore them

Unfortunately many of the residents of Joplin, MO appear to have chosen options 2 or 3, some of whom died as a result of their choice.

In interviews with nearly 100 survivors of the tornado, NOAA officials found that the perceived frequency of warning sirens that night and in previous storms caused people to become "desensitized or complacent to sirens" and to not take shelter.

"Instead, the majority of Joplin residents did not take protective action until processing additional credible confirmation of the threat,"

In other words the emergency sirens are not credible unless combined with other information sources. I don’t doubt that for a second. I do not consider the county emergency sirens to be credible unless I verify them with some other source (weather radar, for example). There simply are too many false alerts.

In IT security, do we spam our users/customers with so many warnings that we are no longer a credible source?

Tuesday, September 13, 2011

Comcast Internet Essentials - Low Cost Internet

Comcast is bringing their ‘Internet Essentials’ to our local service area. Under this program, families who qualify for free school lunches are eligible for $10/month internet from Comcast.

Kudos to Comcast.

I see programs like this as an important factor in reducing the number of “have nots” on the wrong side of the already wide disparity between those who have access to broadband and those who do not. Broadband today is as critical to rural and economically poor areas as electricity was in the 1930s and 1940s. Back then, a rural farmer who had electricity could dramatically improve their productivity versus farmers with no electricity.

In the 1930s, my grandfather’s sister moved from an electrified area of Wisconsin to a farm in Minnesota with no electricity. She had to pump water by hand, wash clothes by hand, heat the farmhouse with a wood stove, light kerosene lamps…

Today in Minnesota we have rural areas where there is no wired broadband coverage, and we have both rural and metro areas where people with low incomes can’t afford the $30-50/month broadband entry fee. One of our CIOs made it clear (to me) how important this is when he offered that bandwidth to his college was nowhere near as important as bandwidth to the rural area around his college. Rural students were dissuaded from taking classes because they would be forced to complete much of their class work while at the college rather than at home. For some, that’s a barrier.

FWIW – For the last ten years or so, we’ve been using Comcast’s metro-area gigabit Ethernet as the wide area network connection for about a dozen of our metro area colleges. The service is less expensive than any competitor’s and it has been at least as reliable as services from other carriers.

Sunday, September 11, 2011

What do Linux.com & Kernel.org have in common?

linux.com

kernel.org

Down for maintenance. Hacked…pwned…rooted…

Can you imagine the holy shitstorm that the Linux fanboys would be flinging out the door if this had happened to Microsoft?

The root cause analysis on these will be interesting reads.

Saturday, September 3, 2011

HP Drops State of the Art Tablet, Re-Introduces Antique Calculator

HP’s Touchpad Tablets are dead. HP’s RPN calculators are back.

What’s next, single pen flatbed plotters?

BTW- I must be old. I still have an HP 11C…


…and I remember when we upgraded our single pen flatbed plotter to a state of the art 6 pen moving paper plotter complete with automatic pen selection. Instead of the plotter stopping and waiting for you to switch from the black pen to the red pen, the plotter would automagically put the black pen back into the carousel and pick up the red pen.

We were impressed.

Thursday, September 1, 2011

Kernel.org hacked…

The message from kernel.org is consistent with the message from pretty much everyone that gets hacked.

  • Don’t worry
  • Be happy
  • We know what we are doing
  • Everything is Ok

I’ll be looking forward to something resembling ‘full disclosure’. It should be an interesting read.

Saturday, August 27, 2011

Oracle 11.2.0.n - Sev 1, Sev 1, Sev 1, and Sev 1

 


One database, four SRs at Sev 1. The oldest has been at Sev 1 for 16 days.

Nice, eh?

We’re pretty sure that Oracle 11.2.0.wtf doesn’t play anywhere near as nice with our workload as 10.2.0.[45].

FWIW - The ‘SUN box stuck’ SR is open because a diagnostic script that Oracle had us run deadlocked a DB writer on a libaio bug in Solaris 10 (Bug 6994922).

Thursday, August 18, 2011

Deprovisioning as a Security Practice II

In Service Deprovisioning as a Security Practice, I asserted that using a structured process for shutting down unused applications, servers & firewall rules was good security practice.

On the more traditional employee/contractor side of deprovisioning, I often run into managers who view employee deprovisioning as something that protects the organization from the rogue former employee who creates chaos after they leave. If they feel that the former employee is leaving on good terms and is unlikely to ‘go rogue’, they treat account deprovisioning as a background, low-priority activity.

There is obviously an interest in protecting the organization from the actions of the former employee, but just as important to me is protecting the employee/contractor from events that happen after they leave. I’d really hate to see someone get blamed for an event that happened after they left our employment. That’d be really unfair to them.

For employees who are leaving on good terms, making sure that their accounts are properly disabled is essential to ensure that they don’t get blamed for things that they didn’t do.

Tuesday, August 16, 2011

Have all big government internet projects

According to a UK ePetition by Harel Malka, we should:

Have all big government internet projects pass the approval of a technical panel made of professionals from the tech statup[sic] sector.

This is an interesting idea – and one that I could buy into (under the right conditions…)

I’m a government employee who manages systems and projects that run into the millions of dollars. Would advice from the private sector help me?

Maybe.

Caveats:

Private sector consultants are in it for the money. I can pay them for advice, but in all honesty, it’s not a sure thing that I’ll get advice that is worth more than what I paid. I’ve seen plenty of cases where ‘Those who can, do; those who can’t, consult.’

Free advice from private sector IT might fall under a different umbrella though. Presumably one could find skilled private sector IT practitioners who have an altruistic motive rather than a profit motive, and presumably one could find skilled persons who can donate sufficient personal time to solving public sector problems, or who work for a corporation that is willing to release them to advise on public sector projects instead of performing miracles for profit in their own corporation.

The ePetition is a bit off the mark though. It asserts that:

All proposals of high budget IT projects should pass through a panel of independent professionals from the private sector who are experienced in running large scale internet start-ups. [emphasis mine]

I’d suggest that there is no reason to think there is a relationship between large scale startups and large scale IT projects involving legacy business processes, government rules & laws, legacy systems, legacy processes, public sector budget cycles, etc. I’d rather see advice from those who are experienced in large scale IT projects than from those who’ve run successful startups. I don’t think that’s the same skill set.






Monday, August 8, 2011

Gig.U, Gigabit to the Home

Gig.U is on track. That’s cool.

I’ll be very interested to see whether Gigabit Ethernet to the home makes a difference to the ordinary home user. I’ll go on record and say that I don’t think it will. The Gig.U experiment might come up with novel and interesting uses that can’t be met by a 10 or 100Mbps home connection, but even if those interesting & novel new uses for high bandwidth to the home show up, they will not radically change ordinary home users’ lives.

Once you get above about 6Mbps to the home, what makes a difference to the home user isn’t bandwidth, it’s data caps & quotas. If I have a 6Mbps internet connection with a high data cap (like Comcast’s 250GB cap), I can radically change how I consume information. If I have a higher bandwidth connection but a low data cap (like a 2GB cap on a 3G/4G phone or the 50GB caps imposed by other ISPs), I can’t fundamentally change how I consume information/media. That’s why I don’t care if my phone is 3G or 4G. In either case it’s still a 2GB cap, so it’s still a handicapped phone. Because it’s capped, it’s not capable of changing my lifestyle.
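
The arithmetic behind that is easy to check. A back-of-the-envelope sketch in Python (the caps are the ones quoted above; the 20 Mbps 4G figure is my guess, and 1 GB is treated as 1000 MB):

# How many hours of flat-out use before you hit the cap?
def hours_to_hit_cap(mbps: float, cap_gb: float) -> float:
    gb_per_hour = mbps / 8 / 1000 * 3600     # Mbit/s -> GB/hour (1 GB = 1000 MB here)
    return cap_gb / gb_per_hour

for label, mbps, cap_gb in [("6 Mbps, 250 GB cap", 6, 250),
                            ("4G phone, 2 GB cap", 20, 2),
                            ("Gigabit, 250 GB cap", 1000, 250)]:
    print(f"{label}: cap reached after {hours_to_hit_cap(mbps, cap_gb):.1f} hours of full-rate use")

# 6 Mbps / 250 GB:  ~93 hours   -- enough to change how you consume media
# 20 Mbps / 2 GB:   ~13 minutes -- the cap, not the speed, is the limit
# 1 Gbps / 250 GB:  ~33 minutes -- gigabit doesn't help if the quota stays put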

In short:

  • Low caps prevent high bandwidth from being a game changer.
  • Faster bandwidth is irrelevant unless it is accompanied by a higher cap/quota. Without a high cap/quota, we are not going to change how we access information.
  • Slower bandwidth with a higher cap can dramatically change how we access information, and therefore is more valuable than faster bandwidth with a lower cap.

As I’ve written before:

  • Network access should be ubiquitous. That means everywhere, no excuses.
  • Moderate speeds and ubiquitous coverage is more important than high speeds with sporadic coverage.
  • Low access costs are essential - under $40/month, for example. Tiered access is OK: if you want cheap, you give up fast.
  • Low end (4Mbps) broadband is like electricity, it’s a necessity, and even the poorest of society should have access to it.
  • There have to be reasonable quotas. Comcast’s 250GB/month quota is quite reasonable. Others are not.

Broadband access is like the railroads on the prairie. When the railroads got built, you either made sure they went through your town or your town died. That’ll happen with broadband too: communities that have incumbent telco/cable providers that do not deliver low cost, ubiquitous broadband will shrivel up and die, much like the communities that got bypassed by the railroads.

FWIW – At work I have GigE to the desktop connected to a GigE LAN that uplinks to a 10Gig backbone that is connected to multiple Tier 1 ISPs at multi-gigabit or 10Gigabit speeds, but none of that has made any difference in how I work, how much work I get done or what I do at work.

What has made a difference?

VDI.

Friday, May 27, 2011

A new means of releasing software

From a recent conversation with a colleague, I learned that worms have been around a lot longer than I imagined:

.                      JOHN WALKER   JANUARY 1975
.
.
. THIS PROGRAM IS A TOTALLY NEW WAY OF DISTRIBUTING VERSIONS OF
. SOFTWARE THROUGHOUT THE 1100 SERIES USER COMMUNITY.  PREVIOUS
. METHODS REQUIRED THE DELIBERATE AND PLANNED INTERCHANGE OF
. TAPES, CARD DECKS, OR OTHER TRANSFER MEDIA.  THE ADVENT OF
. 'PERVADE' PERMITS SOFTWARE TO BE RELEASED IN SUCH A MANNER THAT
. IF SOMEONE CALLS YOU UP AND ASKS FOR A VERSION OF A PROCESSOR,
. VERY LIKELY YOU CAN TELL THEM THAT THEY ALREADY HAVE IT, MUCH
. TO THEIR OWN SURPRISE.

Self-replicating software, more than a decade before the Morris worm.

Cool.

One thing that I keep in the back of my mind is that even with nearly 30 years of computing experience, I’m still a newbie. There is a vast body of knowledge and experience that precedes me, and much of that is locked away in the archives of the minds of the brilliant people who invented the things that I use today. It’s unfortunate that only a fraction of that knowledge gets passed on from one generation to the next.

An example of this is my maternal grandfather. He was born early in the 20th century, grew up on a farm, worked as a machinist, factory worker and owned a small engine repair shop. He knew nothing about technology and probably didn’t know anything about combustion, yet he could listen to an engine for a couple seconds, pull out a screwdriver, turn a couple screws on the carburetor and make it run like new.

As a teen, he was responsible for heating the house. That meant spending the winter cutting down trees and splitting the logs needed to keep the house warm two winters into the future (the wood from the current winter had to dry a couple years before it could be used to heat the house). So he knew how to look at a tree, place the chain saw in the right place at the right angle and drop the tree right where it needed to land.

When I was a teen, we had various storms and other events on the family farm. Dozens of trees were damaged and needed to be cut down. My grandfather couldn’t run the chain saw anymore (heart attack), but he knew how to drop trees. I could run the chain saw, but I had no clue how to drop a tree. (Or – I knew how to drop a tree…I didn’t know how to drop a tree in a predictable direction.)

You see where this is going….

As we walked the farm from end to end, he eyed up each damaged tree. Three puffs on his ever present pipe, an eyeball on the most off-balance, mangled, crooked tree around, and he knew how to make it fall. He pointed to a spot on the trunk and held his hand at the angle that I needed to hold the chain saw. As I wrestled the 1960’s vintage gasoline powered saw, whose power to weight ratio was far more weight than power, he tweaked the angle of his outstretched hand and I tweaked the saw to match.

Every single tree fell exactly where it needed to go.

And of course if the saw wasn’t singing the right melody, he held up his hand, I paused, he pulled the ever present screwdriver out of his shirt pocket, touched it to the carburetor and the chain saw magically changed its tune.

What I regret most is that I was only able to retain and use a tiny fraction of his knowledge. I’m sure that if I did that today without him guiding my saw, the trees would fall in a direction modeled by some function involving /dev/random, and the saw’s melody would pain the neighborhood.

This happens every generation, and certainly happens in system administration, development and the other foundations of our technology.

Too bad.

Wednesday, April 13, 2011

Government Remotely Disables Software on Personal Computers

The FBI remotely disabled software installed on privately owned personal computers located in the United States.

If this isn’t controversial, it should be.

The software is presumed to be malicious, having been accused of stealing account information and passwords from hundreds of thousands of people.

Does that make it less controversial?

Hundreds of thousands of computers have one less bot on them. That’s certainly a good thing. Hundreds of thousands of computer owners had their computers remotely manipulated by law enforcement. Is that a good thing? A dangerous precedent?

Interesting, for sure.

Update: Gary Warner has an excellent write-up.

Tuesday, April 5, 2011

Your package has arrived.

I'm impressed by this scam e-mail:
Return-path: <tracking@ups.com>
Reply-To: <tracking@ups.com>
From: UPS Shipments <tracking@ups.com>
Subject: Your package has arrived!
Date: Thu, 2 Dec 2010 14:31:34 +0000
To: Undisclosed recipients:;
Dear client<br />
Your package has arrived.<br />
The tracking# is : 1Z45AR990
*****749 and can be used at : <br />
<a href="
http://www.ups.com/tracking/tracking.html">http://www.ups.com/tracking/tracking.html</a><br />
The shipping invoice can be downloaded from :<br />
<a href="
http://thpguild.net84.net/e107_files/cache/invoice.scr">http://www.ups.com/tracking/invoices/download.aspx?invoice_id=3483273</a> <br />
<br />
Thank you,<br />
United Parcel Service<br />

<p>*** This is an automatically generated email, please do not reply ***</p>

UUCLJNFYSDMJENHSLBIXJFGSUGKCVUTDYVBOGM

I’ve snipped the delivery related headers (not interesting) and *’d out a bit of the tracking number. The links are intact.

What is interesting is that when rendered as HTML, the message contains valid URLs for all visible text, including the tracking URL. If you click on the tracking URL and paste in the tracking number, you'll get some poor dude's house in Florida. If you click on what appears to be a valid link to an invoice, you have the opportunity to download what I assume is an interesting payload. (But alas, the golden hour has passed - those who amuse themselves by downloading interesting payloads will have to amuse themselves elsewhere.)
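
If you want to see the trick mechanically: a few lines of Python with the standard-library HTML parser will flag an anchor whose visible text is itself a URL that doesn't match the href it actually points at, which is exactly what this message does. A rough sketch, not a mail filter; the example anchor is the invoice link from the message above:

# Flag anchors whose visible text is a URL pointing somewhere other than the actual href.
# Minimal sketch using only the standard library; real mail filtering needs far more than this.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = self.text.strip()
            if shown.startswith("http") and urlparse(shown).netloc != urlparse(self.href).netloc:
                print(f"MISMATCH: text says {shown} but link goes to {self.href}")
            self.href = None

LinkChecker().feed(
    '<a href="http://thpguild.net84.net/e107_files/cache/invoice.scr">'
    'http://www.ups.com/tracking/invoices/download.aspx?invoice_id=3483273</a>')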

The finance people I know never met an invoice they didn't like. I'd imagine that for them, the temptation to click is overwhelming.

It’s not hard to make a case for reading mail in plain text.

BTW - Most bloggers mangle potentially hostile URLs prior to publication. This blogger presumes that the readers of this blog are smart enough to know what’s safe and what isn’t.

Monday, April 4, 2011

Add Robert Half to the Epsilon Breach Fiasco

On my work e-mail:

Today we were informed by Epsilon Interactive, our national 
email service provider, that your email address was exposed due 
to unauthorized access of their system. Robert Half uses Epsilon 
to send marketing and service emails on our behalf. 

We deeply regret this has taken place and any inconvenience this 
may have caused you. We take your privacy very seriously, and we 
will continue to work diligently to protect your personal 
information. We were advised by Epsilon that the information 
that was obtained was limited to email addresses only. 

Please note, it is possible you may receive spam email messages 
as a result. We want to urge you to be cautious when opening 
links or attachments from unknown third parties. We ask that you 
remain alert to any unusual or suspicious emails. 

As always, if you have any questions, or need any additional 
information, please do not hesitate to contact us customersecurity@rhi.com. 

Sincerely,

Robert Half Customer Care 

Robert Half Finance & Accounting
Robert Half Management Resources
Robert Half Legal
Robert Half Technology
The Creative Group

I did not ask to be on RH's e-mail list.

OS X Adaptive Firewall Automated Blacklisting

OS X Mini Server comes with an incarnation of 'ipfw' as its built-in kernel firewall. Configuration of ipfw in an IPv4-only world is pretty simple. The Server Admin GUI covers the basics. The details are in /etc/ipfilter.

Along with the 'ipfw' firewall comes something called 'Adaptive Firewall'. OS X's "Network Services Administration" guide indicates that this adaptive firewall 'just works':

Adaptive Firewall

Mac OS X v10.6 uses an adaptive firewall that dynamically generates a firewall rule if a user has 10 consecutive failed login attempts. The generated rule blocks the user’s computer for 15 minutes, preventing the user from attempting to log in.

The adaptive firewall helps to prevent your computer from being attacked by unauthorized users. The adaptive firewall does not require configuration and is active when you turn on your firewall.

Apparently my Mac Air is doing something to annoy the Adaptive Firewall on my mini. After a day of running ipfw, my Air loses the ability to connect to the Mini Server and 'ipfw show' shows a deny any for the IP address of my Mac Air. I have no clue why it's blacklisting me - I'm connecting via AFP, Samba and Time Machine, all of which work fine until they don't.

Fortunately I keep a handful of Windows 7 laptops around. They don't get blacklisted even when I try.

To tweak the adaptive firewall start with:


Then:

sudo cat /etc/af.plist

sudo cat /var/db/af/blacklist

And when you get tired of being blacklisted by your own server:

/usr/libexec/afctl -w 192.168.0.0/24

The adaptive firewall may (or may not) log to:

/var/log/alf.log

depending on various sysctl, socketfilterfw and serveradmin settings. As far as I can tell, mine doesn't log anything. Interesting things like 'I've blacklisted you' apparently are worthy only of /dev/null.
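
If you want to at least see whether a given address is currently on the list, something like the sketch below works. It naively greps 'ipfw show' output and reads the blacklist file quoted above; both the output format and the file path are assumptions on my part rather than documented interfaces, and it needs to run as root:

# Quick check: is SUSPECT_IP currently being blocked? Assumes the blacklist path above
# and naive substring matching on `ipfw show` output -- assumptions, not documented APIs.
import subprocess

SUSPECT_IP = "192.168.0.50"   # hypothetical address of the machine that keeps losing access

try:
    rules = subprocess.run(["ipfw", "show"], capture_output=True, text=True).stdout
    if SUSPECT_IP in rules:
        print(f"{SUSPECT_IP} appears in the active ipfw ruleset")
except FileNotFoundError:
    print("ipfw not found (not an OS X Server box?)")

try:
    with open("/var/db/af/blacklist") as f:
        if SUSPECT_IP in f.read():
            print(f"{SUSPECT_IP} is in the adaptive firewall blacklist")
except OSError as err:
    print(f"couldn't read the blacklist: {err}")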

I bought the Mini Server for my home network because after a decade of running Solaris, I've decided that I want simple, straightforward technology at home so I can spend less time reading man pages and tweaking config files.

Thursday, March 10, 2011

Square & VeriFone, My phone accepts payment cards

Square allows you to turn your phone into a payment card terminal.

Cool. For a mere 2.75% overhead, a merchant can accept credit cards using a free magnetic card reader attached to the phone’s headset jack. Your customers swipe their card and scribble their signature on your iSplat’s screen, and your bank account gets a credit.

The obvious questions: How do you secure a mobile application such that it can safely handle payments? Is your Square enabled phone now covered under some sort of compliance regime?

Square says they are secure, but they’ve loaded lots of weasel language into their User Agreement and Commercial Entity Agreement. (I don’t make a habit of reading merchant agreements though, so their language may be typical for the trade, but the part where they exempt themselves from any liability or damages caused by 3rd party trojans would concern me.)

VeriFone disagrees, claiming that the Square system is vulnerable to rogue mobile apps, and claiming to have (in an hour) written an app to exploit Square. But VeriFone is a competitor and FUD works, so we have to ask – is VeriFone any different? From what I read on their FAQ they encrypt in hardware and only use the phone for transmission of encrypted data, so they might be different. As expected, Square disagrees with VeriFone, but their CEO’s carefully worded letter makes no assertion as to the security of their application.

I’d compare Square’s solution to running a card swipe terminal on the USB port of an ordinary desktop operating system & reading the card data with an Internet-downloadable application. The operating system must be presumed insecure (we have no evidence that any general purpose operating system has ever been invulnerable to exploitation), the payment card application hosted on the operating system can not (by definition) be more secure than the operating system, and unless the terminal performed some sort of encryption prior to sending the stream to the application, any compromise of the host OS would result in compromise of the card data.
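
To make the contrast concrete, here is a toy sketch of the two designs: a reader that hands raw track data to the host, versus one that encrypts in the reader with a key the host never sees. It is a conceptual illustration only, not either vendor's actual protocol; it uses the third-party Python 'cryptography' package and a made-up track string:

# Toy contrast between the two reader designs discussed above. Conceptual only --
# not Square's or VeriFone's actual protocol. Requires the 'cryptography' package.
from cryptography.fernet import Fernet

track_data = "4111111111111111=15121010000000000000"   # made-up track 2 data

def dumb_reader() -> str:
    # Design A: the reader hands raw track data to the host; anything on the host can read it.
    return track_data

processor_key = Fernet.generate_key()   # in design B this key lives in the reader and at the processor
def encrypting_reader() -> bytes:
    # Design B: the reader encrypts before the data ever reaches the host OS.
    return Fernet(processor_key).encrypt(track_data.encode())

print("malware on the host sees (design A):", dumb_reader())
print("malware on the host sees (design B):", encrypting_reader()[:40], b"...")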

But it is so convenient.

Update 08/05/2011: And insecure.

Wednesday, March 2, 2011

Temporal Juxtaposition - The future of mobile banking

E-mail from a colleague:

So, within minutes of one another:

Roundtable's Pitts: Mobile Will Connect Channels, Improve Security

"Mobile and banking fit together like chocolate and peanut butter," says Jim Pitts, project manager of the Financial Services Technology Consortium, the technology solutions division within The Financial Services Roundtable.
[ ... ]

Google Kicks Rogue Apps Out of the Android Market

"[ ... ] Before their removal, the apps garnered between 50,000 and 200,000 downloads. The apps caused the phone to perform functions without the owner's consent. The Trojan embedded in them used a root exploit to access all of the phone's data and download malicious code.

The publisher has been removed from the Android Market completely, and its apps reportedly have been deleted from phones, but this won't remove code that has been back-doored into a phone's program. Google reportedly is working on that problem.
[ ... ]

Awesome. We are going to bet our financial future on a rootable platform. I wonder how that will turn out.

I’m feeling déjà vu.

Tuesday, February 15, 2011

Somewhere in the OraBorg, an RSS feed is being updated

It’s Tuesday. My pre-OraBorg Google Reader subscription shows a stream of security updates. Looks pretty bad:


Wow – there are security vulnerabilities in Mozilla 1.4, ImageMagick, a2ps, umount & a slew of other apps. I’d better kick our patch management process into high gear. It’s time to dig into these and see which ones need escalation.

Clicking on the links leads to Sunsolve, the go-to place for all things Solaris. Sunsolve redirects to support.oracle.com. support.oracle.com has no clue what to do with the redirect.

Bummer… I’d better do some serious research. Google, of course:


2004, 2005, 2006…WTF???

Conclusion: Oracle is asking us sysadmins to patch five-year-old vulnerabilities. They must think that this will keep us from whining about their current pile of sh!t.

Diversion. Good plan. The borg would be proud.

One last (amusing) remnant of the absorption of Sun into the OraBorg.

Friday, February 11, 2011

Backup Performance or Recovery Performance?

“There is not a guaranteed 1:1 mapping between backup and recovery performance…” Preston de Guise, “The Networker Blog”

Preston’s post reminded me of one of our attempts to build a sane disaster recovery plan. The attempt went something like this:

  1. Hire consultants
  2. Consultants interview key staff
  3. Consultants draft recovery plan
  4. Consultants present recovery plan to executives

In the general case, consultants may or may not add value to a process like this. Consultants are in it for the money. The distinguishing factor (in my eyes) is whether the consultants are attempting to perform good, cost-effective work such that they maintain a long-term relationship with the organization, or whether the consultants are attempting to extract maximum income from a particular engagement. There is a difference.

On this particular attempt, the consultants did a reasonably good job of building a process and documentation for declaring an event, notifying executives, decision makers and technical staff; and managing communication. The first fifty pages of the fifty thousand dollar document were generally useful. They fell down badly on page 51, where they described how we would recover our data center.

Their plan was:

  • choose a recovery site
  • buy dozens of servers
  • hire an army of technicians (one for each server)
  • simultaneously recover each server from a dedicated tape drive that came pre-installed in each of the shiny new servers.
  • complete recovery in fifty seven hours

To emphasize to the executives how firm they were on the fifty seven hour recovery, they pasted Veritas specific server recovery documentation as an addendum to the fifty thousand dollar plan.

Unfortunately, their recovery plan bore no relationship to how we backed up our servers. That made it unusable.

Reality: at the time of the engagement:

  • we did not have a recovery site
  • we had not started looking for a recovery site
  • we did not have one tape drive per server. All backups were multiplexed onto four fiber channel attached tape drives
  • we did not have Veritas Netbackup, we had Legato Networker
  • we could not recover individual servers from individual tape drives. All backup jobs were multiplexed onto shared tapes
  • we could not recover dozens of servers simultaneously. All backup jobs were multiplexed onto shared tapes

Unfortunately, the executive layer heard ‘fifty seven hours’, declared victory and moved on.

I tried to feed the consultants useful information, such as the necessity of having the SAN up first, the architecture of our Legato Networker system, the number of groups and pools, the single-threaded nature of our server restores (vs the multi-threaded backups), the improbability of being able to purchase servers that exactly matched our hardware (hence the unlikelihood of a successful bare-metal recovery on new hardware), not having a recovery site pre-planned, not having power and network at the recovery site, and various other failures of their plan.

You get the idea.

The consultants objected to my objections. They basically told me that their plan was perfect, and that it was proven so by its adoption by a very large nationwide electronics retailer headquartered nearby. I suggested that we prepare a realistic recovery plan, accounting for the above deficiencies, and that that plan be substituted for the ‘fifty seven hours’ part of the consultants’ plan. They declared me to be a crackpot and ignored my objections.

Using what I thought were optimistic estimates for an actual recovery I built a marginally realistic Gantt chart. It looked something like this:

  • Order all new hardware – 48 hours. Including an HP EVA SAN and fiber channel switches, an HP GS160, DLT tape changers, a Sun E10K and miscellaneous SPARC & Wintel servers. Call in favors from vendors, beg, borrow or extra-legally appropriate hardware as necessary. HP had a program called ‘Recoverall’ that would have facilitated hardware replacement. Sun didn’t.
  • Locate new site – 48 hours. Call in favors from other state agencies, the governor’s office, other colleges and universities, and Uncle Bob. Can be done in parallel with hardware ordering.
  • Provision new site with power, network, fiber channel – 72 hours. I’m optimistic. At the time (a half dozen years ago) we could have brought most systems up with duct tape and baling wire for a network, skipped inconveniences like VLANs and firewall rules, used gaffer’s tape to protect the fiber channel runs, etc.
  • Deliver and install hardware – 72 hours. (Optimistic).
  • Configure SAN fabric, zoning, LUN’s, tape drives, network – 12 hours.
  • Bootstrap Legato, connect up DLT drives, recover indexes – 8 hours.

Then (roughly a week into the recovery) we’d be able to start recovering individual servers. When estimating the server recovery times, I assumed:

  • that because we threaded all backups into four tape drives, and because each tape had multiple servers on it, we’d only be able to recover four servers at a time.
  • that a server recovery would take twice as long as the server backup
  • that staff could only work 16 hours per day. If a server finished restoring while staff were sleeping, the next server recovery would start when staff woke up.

Throw in a few more assumptions, add a bit of friction, temper my optimism, and my Gantt chart showed three weeks as the best possible outcome. That’s quite a stretch from fifty seven hours.
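
Those assumptions turn into a crude model easily enough. The sketch below invents the server count and the backup durations, but keeps the structure described above – four shared drives, restores at twice the backup time, a sixteen-hour staffed day – and it lands well past two weeks once you put the week of site, hardware, SAN and Legato work in front of it:

# Crude restore-time model from the assumptions above. The 40 servers and their backup
# durations are invented; the four drives, 2x restore factor and 16-hour day are as described.
import heapq

backup_hours = [4, 6, 8, 10, 12] * 8           # 40 hypothetical servers
restore_hours = [2 * b for b in backup_hours]  # restores assumed to take twice as long as backups

DRIVES = 4        # all restores funnel through four shared tape drives
WORKDAY = 16      # staffed hours per day; restores idle while staff sleep

drives = [0.0] * DRIVES                        # staffed-hours at which each drive frees up
for r in sorted(restore_hours, reverse=True):  # longest restores first
    t = heapq.heappop(drives)                  # earliest-free drive
    heapq.heappush(drives, t + r)

staffed_hours = max(drives)
print(f"{staffed_hours:.0f} staffed hours, about {staffed_hours / WORKDAY:.1f} calendar days of restores,")
print("after roughly a week spent getting a site, hardware, SAN fabric and Legato back up.")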

The outcome of the consulting gig was generally a failure. Their plan was only partially useful. If we had followed the plan, we would have known whom to call in a disaster, the decision makers, the communication plans, etc., but we would not have had a usable plan for recovering a data center.

It wasn’t a total loss though. I used that analysis internally to convince management that given organizational expectations for recovery vs. the complexity of our applications, a pre-built fully redundant recovery site was the only valid option.

That’s the path we are taking.

Wednesday, February 9, 2011

Tipping Point Vulnerability Disclosures–IBM Incompetence?

Last August, Tipping Point decided to publicly disclose vulnerabilities six months after vendor notification. The six months is up.

Take a look at IBM’s vulnerability list and the actions taken to resolve the vulnerabilities. If you don’t feel like reading the whole list, the snip below pretty much sums it up:

Timeline:
[08/26/2008] ZDI reports vulnerability to IBM
[08/26/2008] IBM acknowledges receipt
[08/27/2008] IBM requests proof of concept
[08/27/2008] ZDI provides proof of concept .c files
[07/12/2010] IBM requests proof of concept again and inquires as to version affected
[07/13/2010] ZDI acknowledges request
[07/14/2010] ZDI re-sends proof of concept .c files
[07/14/2010] IBM inquires regarding version affected
[07/19/2010] IBM states they are unable to reproduce and asks how to compile the proof of concept
[07/19/2010] ZDI replies with instructions for compiling C and command line usage
[01/10/2011] IBM states they are unable to reproduce and requests proprietary crash dump logs

Tipping Point: Two Thumbs Up.

IBM: Two and a half years. Still no clue.

What IBM’s executive layer needs to know is that people like me read about the failure of their software development methodology in one division and assume that the incompetence spans their entire organization. That may not be fair – IBM is a big company and it’s highly likely that some/most software development groups within IBM are capable/competent. However – if one group is allowed to flounder/fail, then it’s clear to me that software quality is not receiving attention at a high enough level within IBM to ensure that all software development groups are capable/competent. If some/most software developed within IBM is of high quality, it’s because some/most software development groups believe in doing the right thing. It’s not because IBM believes in doing the right thing.

In other news, IBM’s local sales team is aggressively pushing for me to switch our entire Unix/Database infrastructure from Solaris/Oracle on SPARC to AIX/DB2 on Power.

Guess what my next e-mail to their sales team is going to reference?

Sunday, February 6, 2011

Well formed Comcast phishing attempt - “Update Your Account Information”

A well formed e-mail:


No obvious spelling errors, reasonably good grammar, etc. One red flag is the URL to the Comcast logo, but I wouldn’t bet on users catching that. The embedded link is another red flag:

http://login.comcast.net.billings.bulkemail4sale.com/update/l0gin.htm

[s/0/o/]

But one that would fool many. Users will not see that URL unless their e-mail client has the ability to ‘hover’ a link destination.
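
The hostname trick is worth spelling out: everything in front of bulkemail4sale.com is just a subdomain the attacker controls. A deliberately naive Python sketch of checking which domain actually gets the traffic (naive because it ignores multi-part public suffixes like .co.uk):

# Which domain does this URL really belong to? Deliberately naive: take the last two
# labels of the hostname. Good enough here; a real check needs the public suffix list.
from urllib.parse import urlparse

def registered_domain(url: str) -> str:
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

phish = "http://login.comcast.net.billings.bulkemail4sale.com/update/l0gin.htm"
print(registered_domain(phish))                       # bulkemail4sale.com -- not Comcast
print(registered_domain("http://login.comcast.net"))  # comcast.net -- what the prefix imitates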

The ‘login page’ is well formed & indistinguishable from Comcast’s Xfinity login page:


All the links in the bogus login page (except the form submit) go to real Comcast URLs, the images are real, the page layout is nearly identical. The only hint is that the form submit doesn’t post to Comcast, but rather to [snip].bulkemail4sale.com/Zola.php:


Zola.php? Hmmm…

Filling out the bogus login page with a random user and password leads to a “Comcast Billing Verification” form requesting last, middle & first names, billing address, credit card details including PIN, card issuing bank, bank routing number, SSN, date of birth, mother’s maiden name, driver’s license number, etc…

The “Comcast Billing Verification” form is very well constructed, generally indistinguishable from normal Comcast/Xfinity web pages. The submit action for the “Comcast Billing Verification” form is:


Hacker.php? This is not going to end well.

This is a very well constructed phishing attempt. Impressive, eh?

It took me a bit of detective work to confirm that this was indeed a phish. Ordinary users don’t have a chance.

Where is Anonymous when you need them?

Tuesday, February 1, 2011

The benevolent dictator has determined…

…that you are not qualified to decide what content you read on the device you’ve purchased.

If the New York Times story is true, Apple is rejecting an application because the application allows access to purchased documents outside the walled garden of the iTunes app store.

“Apple told Sony that from now on, all in-app purchases would have to go through Apple, said Steve Haber, president of Sony’s digital reading division.”

I keep thinking that there’d have been an outcry if Microsoft, at the height of their monopoly, had exercised complete control over the documents that you were allowed to purchase and read on your Windows PCs.

Thursday, January 27, 2011

$100 million dollars per mile and no redundancy?

“Light-rail service throughout downtown Minneapolis was halted Thursday for about four hours because of a downed wire that powers the trains from overhead…”

Apparently there is no redundancy.

I’m not thinking about this because I care about the commuters who were stranded, but rather because of how it relates to network and server redundancy and availability.

My group delivers statewide networking, firewalling, ERP and eLearning applications to a couple hundred thousand students and tens of thousands of employees.

  • Availability is expensive
  • We hear about it when our systems suck
  • We have no data that can tell us how much an outage costs. We are an .edu. Our students don’t switch vendors if they can’t access our systems for a few hours.

In that environment, how do you make a cost vs. availability decision?

Anecdote:

Years ago (circa 2001) we found a carrier that would offer us OC-3 (150 Mbps) for what was essentially the same price as the incumbent telco (Qwest) would charge us for two T1s (3 Mbps). The new carrier had less experience with data. They were a cable TV provider.

  • We expected availability to be worse than the incumbent’s telco-class T1s.
  • We knew full well that the carrier had a single point of failure on a fiber path through a small town, and if there was a fiber cut in that town, 50,000 students would get knocked off the network. Unlike most carriers, they told us where they were weak.

What decision would you make?

I hesitated, figuring that it’d be best to get feedback from the campuses that were currently starved for bandwidth, so I posed the question at our quarterly CIO meeting.

Me: “If I have to make a choice, would you trade availability for bandwidth?”

CIOs: “Yep.”

(At least that’s how I remember it…)

The carrier delivered excellent service and availability. Service was awesome, availability was at least as good as the expensive incumbent’s. We gained confidence in the low cost carrier’s ability to deliver, and year by year we migrated more campuses to the low cost carrier.

Life was good. We traded 3 Mbps for 150 Mbps without increasing the budget. Hail to the hero (that’d be me).

Fast forward 4 years. The small town in the path of the low cost carrier’s non-redundant leg decided to build a road. The town cut the fiber, a dozen campuses disappeared from our network map for 8 hours, and to put it mildly, the campuses were not happy.

It gets worse though. The carrier patched up the busted fiber late that day, then scheduled a plow crew to come out a week later, bury a new fiber parallel to the old one and move us to the new fiber.

Yep, the carrier’s crew cut their own fiber. Another half-day outage.

It gets even worse. The carrier didn’t have facilities all the way to our core where we needed them, so they leased a big carrier’s circuit to the big city. A few weeks after the two fiber outages, the big carrier in the big city smoked a network interface.

Outage number three. Let’s all throw rocks at the goat (that’d be me).

The next quarterly meeting wasn’t fun.

Suffice to say that we now make a different calculation on the relative value of bandwidth vs. availability.

Back to the broken train. A four-hour outage, a construction cost of a hundred million dollars per mile, and no redundancy?

Damn.

Monday, January 17, 2011

LeanEssays: A Tale of Two Terminals

Mary Poppendieck's LeanEssays: A Tale of Two Terminals compares the smooth opening of Terminal 3 at Beijing Capital Airport with the rough opening of Heathrow Terminal 5. 

A great read for those who've been at the tail end of a long, complex, schedule-slipping, scope-creeping IT project (or for those who have been at the head end of a long, complex IT train wreck).

Via: Tom Limoncelli, Testing is a waste of time.