
Staying Out Of Holes February 18, 2009

Posted by Chuck Musciano in Leadership.

Since the dawn of computing, we’ve worked really hard to make technology easier and more accessible.  Computers started out in protected data centers, with mere mortals kept far, far away from actually using the machines.  Today, we’ve put powerful tools into the hands of end users, enabling them to do all sorts of amazing things on a regular basis.

As users become more comfortable with these tools, they try to acquire more of them.  That’s a great thing, until those well-meaning end users get in over their heads and wind up holding a technology tiger by the tail.

Let’s be honest: computers, especially enterprise computing systems, are inordinately complicated.  They are not easy to buy, install, configure, or maintain.  It takes a team of experienced professionals to make sure that a company buys the right systems, deploys them correctly, and maintains them for maximum business advantage.  When end users try to take that on themselves, disaster invariably ensues.

Every CIO can tell a story about some non-IT organization that tried to buy some cool system without bringing IT into the picture.  Typically, the first call comes about halfway into the implementation, when the project is behind schedule, the gory details are being exposed, and the poor users have no idea how to get out of the hole they have dug for themselves.  By the time IT gets involved, lots of money and time have been wasted, and the cost of recovery far exceeds the project estimates and often outweighs any potential benefits of the system.

It is easy to blame these scenarios on the users.  The real blame lies with IT.  We need to build trust with our users so that they feel comfortable turning to us when they need a new system or have a problem to solve.  The worst situations occur when IT is so inaccessible and arrogant that users prefer the pain of a bad implementation to the pain of dealing with IT.

Beyond earning trust, we also need to educate our users so they understand why our systems work the way they do, and how we integrate new technology to benefit everyone.  Systems architecture is of little interest to end users, but we must teach them how we fit all the pieces together so they can see how disparate systems are made to work as one.

Finally, IT brings a lot of non-technical benefits to any technology acquisition.  In my experience, users make a good effort at finding a tool that has the right features to meet their needs.  Where they completely miss the mark is with the contract and service details around the purchase.  Users have no idea how to negotiate good pricing, or how to see through the smoke a vendor may be blowing their way.  They don’t know about service level agreements, or good maintenance pricing, or how to write a contract that indemnifies them against a product failure.  They don’t know how to evaluate a vendor for financial stability, or to know if their solution is a risky leading-edge idea or an outdated platform on its last legs.  We know all these things, and we need to provide that assistance to our users.

Like almost every other aspect of our job, it starts with communication and trust.  Begin by reaching out to users when they aren’t facing big problems.  Calmer times give you the opportunity to explain what we do, why we do it, and how we can help.  When users do reach out to us, bend over backwards to help them navigate the world of technology.  Respect their requests and take time to figure out what they really need.  Work hard when users aren’t in a hole, and you’ll eventually keep them from digging a new one.

Another Ancient Artifact November 3, 2008

Posted by Chuck Musciano in Random Musings, Technology.

I had another “really old” moment with my son the other day.  My first job out of college was with Harris Corporation, and I was explaining how Harris evolved from a company called Radiation.  Back in the 1950s, Radiation got its start building telemetry equipment for the space program.  I told my son that it was very clever technology for the time, capturing real-time data from rockets and recording it on magnetic tape.

And then I got the blank look.  “Magnetic tape?  What’s that?”

Surely we haven’t reached this point with magnetic tape, have we?  I scrambled for some common point.  Finally I settled on cassette tapes.  “Remember how we used to have those cassette tapes?  The tape in them is magnetic tape.  It’s plastic, coated with iron oxide, and you can record data and music on it.  The telemetry was recorded on tape like that, but wider.”

My son nodded in understanding, but it was clear that this was a distant memory, at best.  And why not?  He grew up in the tail end of the CD era, the last physical media we’ll probably ever know.  He manages his data online, shuttled between various devices via networks both large and small.  He still likes to buy CDs for the cover art and liner notes, but immediately rips them to iTunes and puts the CD on his shelf.

I’m proud to report that I actually have a nine-track, 6250 bpi tape.  (That’s bits per inch, by the way.  Much denser than the old 1600 bpi tapes.)

When I moved from my first job at Harris (writing compilers) to my second (researching parallel computer architectures) I dumped all my mainframe programs to tape in case I would ever need them again.  Fat chance!  I’ve never read that tape, and I’ve never had a need for a crucial snippet of PL/I to complete a project.  But I still have that tape because, well, you never know if the need will arise.  Now, I just need to track down a nine-track, 6250 bpi tape reader.  And a matching channel controller for it.  And an IBM mainframe.  And a 3270 console.  Ebay, perhaps?

Head In The Clouds June 19, 2008

Posted by Chuck Musciano in Technology.

The latest rage in the world of IT is “cloud computing.”  The “cloud” is the internet, often represented as an all-connected puffy blob in countless network diagrams and PowerPoint presentations.

Cloud computing moves your applications away from your local servers and desktops and houses them on servers located in the cloud.  Managed by great, benevolent entities like Google, Amazon, and Microsoft, your systems will run better and faster. As butterflies dance around your worry-free head, you’ll be able to focus on your “core competencies,” whatever they may be.

Hmmm.  Centralized computing services with local display technology.  Where have I heard this before?  Oh, that’s right!  We used to call it “mainframe computing!”  And that local display technology?  A 3270 terminal!  In the ’80s, we built dedicated display devices called X Terminals and used them to connect to centralized servers, where we would run our applications.  In the ’90s, we deployed “thin client” devices, moving the storage to the server but shifting the computing power to the device.

Those who forget history are condemned to repeat it.

Still using any of these?  Of course not.  If we have learned one thing in the past 50 years of computing, it is that users demand more and more local power, control, and capability.  With that power they will do new and unforeseen things that will dramatically alter how we use information.  Every effort to pull that power in, to restrict what people do, has failed.  Trying to pull applications off the desktop and run them remotely may be possible technologically, but it will never succeed socially.

I say this even as I continuously try to standardize and manage a far-flung IT infrastructure for my company.  The difference?  I accept that there will be local applications and capabilities.  My standards seek to embrace and manage that local element, instead of trying to pull it back and eliminate it.

Don’t misunderstand: you can shift certain services and capabilities to the cloud with great success.  My company has outsourced several business processes to external service providers.  My personal data at home is backed up to an external service called Mozy, which works very well.  This blog runs on WordPress.com, instead of some server I manage myself.  My personal email is externally hosted as well.

The idea of moving all of my personal data to the cloud and accessing my applications there is incomprehensible.  Imagine doing everything (everything!) at the speed of your current internet connection.  I have several thousand photos on my laptop at home.  I manage them with Adobe Photoshop Elements, which provides a fast, high-fidelity interface that lets me flip through hundreds of pictures in a few seconds.  Ever tried that on the web?  Go to Flickr and try to preview a few hundred pictures.  That’s an enjoyable experience.  Now extend that to hundreds of documents that you’ll want to edit and manage.  No way.  Word and Excel are slow enough running locally; they (or their equivalent) will never be better at the other end of a long wire.

But speed isn’t even the real problem.  People like to use their computers anywhere, anytime.  High-speed connections are not pervasive, and your cloud computing experience is only pleasant at very high speeds.  It stops entirely when the connection breaks.  Cloud proponents are struggling to provide an offline equivalent of their services so you can keep working while disconnected.  Here’s a thought: since they cannot predict what you might want to do while offline, you’ll probably want to keep a copy of everything you need on your local machine.  You know, just in case.  And you’ll probably need to keep copies of the applications as well, so you can access your data.  After all, data is useless without the application.  Let’s see: local storage, local data, local application, local display and keyboard…  it’s like your own personal copy of the cloud, but you can use it anywhere, anytime.  We’ll call it… the Personal Computer!

No Free <Lunch> June 18, 2008

Posted by Chuck Musciano in Technology.

I’ve noticed a disturbing trend in sales pitches and product literature these days.  When I ask if a particular product can easily import or export data with our existing systems, vendors often reply, “Of course!  We can export XML!”

XML, for those readers with actual lives, stands for eXtensible Markup Language.  It is a way to express data so that it can be processed and managed in fairly standard ways.  Essentially, you surround your actual data with keywords, attributes, and plenty of angle brackets to make it more understandable by computers and humans.

To hear some people tell it, anything expressed in XML is instantly recognizable by any other computer anywhere on earth.  In fact, if you place two systems that use XML at opposite ends of your data center, by the next day they’ll have met in the middle, network cables and power cords wrapped around each other in an XML-inspired embrace.

Please.  As we like to say in the computing business, “bits is bits.”  Data, no matter how it is represented, can only be understood by a system that has been explicitly programmed and tested to process that data. XML may make the data easier to process, but someone still has to write, test, and support that code.  And in many cases, XML makes things more complicated.

For example, today is June 18, 2008.  Here is one way to represent that date for transmission between two systems:

   20080618

I’ll bet most of you have decoded this particular data representation: four-digit year, two-digit month, and two-digit day.  Here is the same date in a bit more old-school format:

   08170

Slightly more cryptic, but not too hard to program: the first two digits are the year and the next three are the day of the year (June 18 is the 170th day of 2008).  Notice the retro, pre-2000 two-digit year?  It’s like shag carpeting for programmers!
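A quick sketch of what “not too hard to program” means in practice (the function names here are my own; the two layouts are exactly the ones described above):

```python
from datetime import datetime

# Neither flat format is self-describing; the program must simply
# "know" the layout in advance and decode it accordingly.

def parse_yyyymmdd(s):
    """Decode 'YYYYMMDD', e.g. '20080618'."""
    return datetime.strptime(s, "%Y%m%d").date()

def parse_yyddd(s):
    """Decode old-school 'YYDDD' (two-digit year, three-digit day of year)."""
    return datetime.strptime(s, "%y%j").date()

print(parse_yyyymmdd("20080618"))  # 2008-06-18
print(parse_yyddd("08170"))        # 2008-06-18
```

Both calls decode to the same date, which is exactly the point: the knowledge lives in the code, not in the data.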

Here is the date in one potential version of XML:

   <date>
      <year>2008</year>
      <month type="numeric">6</month>
      <day>18</day>
   </date>

More understandable? Maybe.  Self-documenting?  Sure.  Easier to read, parse, and decode?  No way.  You’ll need an XML parser, a definition document for this version of XML (known as a DTD), and a competent developer to make sense of this particular data stream.
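To make the point concrete, here is a minimal sketch of what that “competent developer” has to write, using Python’s standard XML parser (the element names are my assumptions, matching the example above):

```python
import xml.etree.ElementTree as ET

# Even "standard" XML needs purpose-built code: the parser hands us a
# tree, but only our code knows what the elements mean.
doc = """
<date>
   <year>2008</year>
   <month type="numeric">6</month>
   <day>18</day>
</date>
"""

root = ET.fromstring(doc)
# We still have to know the element names, their meanings, and their types.
year = int(root.findtext("year"))
month = int(root.findtext("month"))
day = int(root.findtext("day"))
print(f"{year:04d}-{month:02d}-{day:02d}")  # 2008-06-18
```

The parser handles the angle brackets, but someone still wrote, tested, and must support the code that turns `<month>` into a number.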

When all is said and done, very little in computing is inherently easy or automatic.  At every level, someone is designing, building, and testing all the little pieces that make that level work.  You may build on that level, but you’ll have issues of your own to deal with.  Never underestimate the difficulty of making systems play well together, and never believe what the salesmen say without digging into the details first.

In The Beginning… January 24, 2008

Posted by Chuck Musciano in Random Musings, Technology.

For me, my computing career started in the fall of 1975. Up to that point, my natural affinity for math and science seemed to be leading to the glamorous world of nuclear physics. Who wouldn’t want to spend their days building bombs and reactors? It was hard to imagine anything more exotic or enticing.

Then I saw it. Tucked in the corner of my high school’s mezzanine was the coolest device I had ever laid eyes on: an ASR33 teletype. Noisy, oily, built like a tank, it was attached to an acoustical modem that, in turn, dialed out to nearby Princeton University. Our high school had an account on the University system that could be used to run BASIC programs. My math teacher, Mrs. Horvath, taught simple computer programming to some of her higher classes. She invited me to try it, and from the moment my fingers touched the keyboard, my life was changed.

My first program allowed you to type in three numbers, after which it would print out the largest of the three. The whole idea of programming, of figuring out sequences of instructions to accomplish some larger goal, was absolutely fascinating. Although I wasn’t in a class that was actually learning to program, Mrs. Horvath let me use the system after school. I’d spend hours writing programs for everything I could think of.

The ASR33 was wonderful. It printed in uppercase only, on rolls of yellow teletype paper. The print carriage used a cylindrical type head that pounded out the characters, and a piston and cup arrangement caught the printhead as it slammed to the left on each carriage return. You could lose a finger if you stuck your hand inside at the wrong moment. When you sat down at that terminal, you knew you were using a computer!

The ASR33 had a paper tape punch/reader, which let you punch your programs to tape without dialing in, saving connect charges. After punching your tape, you’d dial in, feed the tape back in, and quickly enter and save your program. Thus the acronym ASR: the paper tape allowed for Automatic Send Receive. (The lesser model, the KSR, allowed only real-time Keyboard Send Receive).

I can still recall the smell of the ASR33, and the separate, slightly oily smell of the paper tape. The big round keys would travel at least a quarter-inch when you pressed them, and touch-typing was pretty much out of the question. Beyond the chunka-chunka-chunk sound of printing, the only other noise it made was a real bell that would chime. None of this mattered: it was a real computer, and it ran real programs.

I wrote all sorts of programs, from maze generators to a Battleship game to graphing tools and even a program that drew hydrocarbon molecules after you gave it the chemical formula (I’d like to see today’s web hotshots do that on a teletype!). I built a database that tracked our wrestling team’s statistics and another program that generated random music. You couldn’t play the music on the ASR33, of course, but it did print out the complete score so that you could then play it on a piano.

The system also handled FORTRAN and PL/I programs, and I dabbled a bit in those languages as well. Is there anyone left who can still recall typing “PROC OPTIONS(MAIN)” to start out their program?

I have written millions of lines of code since then, for more systems than I can count, but the joy of using that first system still resonates in my soul. I knew then that I’d be playing with computers for the rest of my life. I wonder if those who are just starting in our industry today have similar memories of their first machines. In some ways, the best part of that ASR33 was that it was so primitive; getting it to do anything was a major accomplishment. It’s so easy to do cool things with systems today; is the experience less fun and inspiring as a result?