It's funny that I'm considering the possibility of being a luddite, given how I work in tech and generally love gadgets. However, as I've gotten more "mature," I've become somewhat more skeptical about the latest and greatest technologies. It's not like when I was just a "dumb kid" and fell for anything that I thought looked awesome in the store or in a magazine (mostly with regards to video games). I mean, when I first saw demo units set up for the Philips CD-i (geeze, I was dumb) or 3DO (okay, less dumb here, but that machine really didn't go anywhere either), I for some unfathomable reason wanted those consoles (using the term very loosely with respect to the CD-i). This probably culminated in my purchasing a Sega CD, Sega Saturn, and Sega Dreamcast (I was a huge Sega fanboy back in the day and probably still would be, to a much more reasonable extent, if they still made consoles). Hey, at least I didn't buy a 32X! Anyway, being somewhat burned by these purchases (although all of them gave me some of my all-time favorite gaming experiences, so it wasn't all bad) left me of the opinion that I should probably wait a bit before purchasing new technologies, just to make sure they aren't half-baked and won't be abandoned by their makers (which admittedly I haven't always followed — I did buy a generation 1 iPhone, but only after the price drop).
Other techs I've been very cautious about included cell phones back around 2000-2001, though this was partly due to them still being pretty expensive and service being pretty crappy, especially with the first cell phones I used back in the 90's. My first cell phone plan was therefore very limited — I basically only got enough to use it for emergencies or nights and weekends while out/on the road and still primarily used a land line. Now, my cell phone has become my primary phone, but I still keep a land line around for backup use as reception at my house is sometimes spotty (it also comes out slightly cheaper as part of a bundle with my internet connection than if I got the connection without the land line).
Web-based e-mail is also something I was slow on the uptake with, although part of this was due to how much I loved using mutt on a shell account as my primary email client. However, when that account's spam filters just weren't cutting it anymore, I fired up my dormant Gmail account and switched to that (with appropriate forwarding set up). At first, I primarily used it via IMAP interfaces to mutt and Thunderbird, but later I just went whole hog to the web interface and pretty much never went back. Google eventually added enough features to their web interface that I didn't feel the need to use an IMAP client anymore to get the functionality I wanted for just about all the emails I send and receive. The convenience of being able to access my mail on any computer with a reasonably up-to-date browser was also very nice.
Webmail was probably my first entry into the grand scheme now called "cloud computing." Other examples of cloud computing include Google Docs (and Google's various other online services, like Reader) and Amazon's EC2 and S3. Despite spending some time working on cloud computing infrastructure software at a former job (which I left for reasons other than my opinion on cloud computing as a whole), I never quite got what the big deal was, other than the fact that "it's the hot new thing and it's probably not a bad idea to be working on it while it's still hot."
Part of my skepticism on cloud computing is that I've seen it hyped before, though under different names: utility computing and grid computing, both of which failed to get any market acceptance. At their cores, they both basically offer the same thing as cloud computing: instead of having your own data center, you instead lease storage and CPU time from a service provider. Cloud computing's main difference is that a lot of it takes place over the public Internet instead of a private network between the customer and the service provider, but it's still basically the same concept.
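The "lease instead of own" trade-off described above comes down to simple arithmetic. Here's a minimal sketch of that break-even calculation — all prices are made-up illustrative numbers, not actual provider rates:

```python
# Hypothetical cost comparison between owning a server and leasing
# cloud capacity by the hour. Every number here is an assumption
# chosen for illustration, not a real price.

OWN_SERVER_UPFRONT = 3000.0   # one-time purchase price (hypothetical)
OWN_SERVER_MONTHLY = 50.0     # power/colocation per month (hypothetical)
CLOUD_HOURLY_RATE = 0.10      # leased instance cost per hour (hypothetical)

def cost_owned(months: int) -> float:
    """Total cost of buying and running your own server for `months` months."""
    return OWN_SERVER_UPFRONT + OWN_SERVER_MONTHLY * months

def cost_cloud(months: int, hours_per_month: float) -> float:
    """Total cost of leasing only the hours you actually use."""
    return CLOUD_HOURLY_RATE * hours_per_month * months

# A bursty workload (100 hours/month) favors leasing; a 24/7 workload
# (~730 hours/month) eventually favors owning the hardware outright.
for hours in (100, 730):
    for months in (12, 36):
        print(f"{hours:>3} h/mo over {months} months: "
              f"own=${cost_owned(months):,.0f} "
              f"cloud=${cost_cloud(months, hours):,.0f}")
```

Under these assumed numbers, the occasional user never catches up to the cost of owning, while the always-on user passes the break-even point within a couple of years — which is exactly the pitch utility computing, grid computing, and now cloud computing have all made.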
There is also the talk of "private clouds." This is the idea that a large enough organization that doesn't want to use an outside service provider for some reason (perhaps because they aspire to be a cloud provider themselves) acquires their own cloud infrastructure and manages their own internal cloud that may be distributed across many of their data centers. Hmm... this sounds almost like the 1960s-70s era notion of "big mainframe in the basement with lots of terminals accessing it everywhere else," with the addition that the "mainframe" can be distributed across multiple geographic locations. Now don't get me wrong — some applications of private cloud technology are great. The ability for a geographically dispersed large organization to easily mirror their data across all their locations is definitely useful. Being able to spread virtual machines anywhere across a company's global network to better balance CPU and network utilization is also a very useful feature. However, most of my cloud skepticism is related to cloud computing as provided by a service provider.
The main issues with using a cloud service provider are "How much do I trust my service provider with my data?" and "How reliable is my internet connection?" I can definitely see the possibility of people/organizations not wanting to use a service provider for highly sensitive data (financial data, proprietary blueprints/source code, etc.) due to not being sure how secure the service provider's infrastructure is. You also have to rely on the provider's infrastructure always being reliable so that you can actually get at your data when you need it. Similarly, while internet connections are getting better and better, 100% reliable, ubiquitous, fast, and cheap internet still doesn't exist (at least not for everybody). Once again, I wouldn't want to keep my critical data on an outside site that I may not be able to reach when I absolutely need to access it.
Still, I do use a fair number of cloud applications, but I'm pretty judicious in which ones I use and how I use them. Webmail is a notable example. Google Reader is another favorite. Google Docs is awesome for collaborating with other people across the 'net (although I still use an offline office suite for most of my personal document needs). I also use an online backup service to do offsite backups of my data. What do all these apps have in common? They're all applications that require the internet anyway: if I don't have an internet connection, I can't do what I wanted regardless of whether the app lives in the cloud. For anything that I can do offline, I almost always choose to do it offline. The classic example is an offline database application I put together for managing panels at a Japanese animation convention I staff. While talking to another con's panel organizer about collaborating, he suggested he would make a nice webapp for doing the same thing, and we could work together, given my experience, to make sure it had all the necessary features to do the job. While I didn't get the chance to express my reservations about such a system, my main issue was that internet connectivity at just about every convention I've been to was either too unreliable or too expensive (IMHO — please keep in mind I'm a cheap bastard) for me to feel comfortable using a webapp to manage panels.
That is where I see the future of cloud computing. I don't see it ever fully supplanting offline applications. Instead I see a future of mixed offline/cloud environments, where applications that are a natural fit for the cloud eventually do migrate completely to the cloud, whereas applications that for security, performance, or lack of connectivity reasons are a best fit for offline use remain offline.