Marketing: Art or Science? --- part I
Like music, marketing is both an art and a science
Ask ten people what marketing is and you will get ten different answers, depending on whether they think about promoting high-volume consumer items, advertising fancy automobiles, luring corporations into buying new computers, or anything in between.
Here is how Merriam-Webster defines it: “mar·ket·ing n (1561) 1 a: the act or process of selling or purchasing in a market b: the process or technique of promoting, selling, and distributing a product or service 2: an aggregate of functions involved in moving goods from producer to consumer.”
While Webster leans towards the scientific aspects, I tend towards the artistic side. My definition is shorter: “Marketing is the art of communicating with people … to generate business”. There are four key words in this sentence:
Art – Marketing is more than a science or a discipline. Like music, it requires basic knowledge and technical skills together with creativity and constant training. Creativity comes on top. And, as in music, new technologies play an important role.
With – In the past, marketing was mainly about broadcasting and hammering a unique proposition to the masses through a few limited channels. Today, marketing is increasingly about communicating one-to-one with specific individuals or groups and about interacting with people (two-way communications).
People – The ability to interact with specific groups and individuals allows marketing to address, beyond just customers, other influential people such as employees, investors and opinion makers.
Business – Marketing is not about spreading yet another gospel. It is about making money. Even for charities, the objective is to bring in contributions. This is probably where marketing diverges from being an art. Many artists worry more about expressing feelings than about becoming famous and making money, don’t they?
The mills of the digital revolution
The rise of IT server farms brings us back, full circle, to the beginnings of the industrial revolution
Further to my entry of November 6, I read another interesting article on large IT server farms in the October issue of WIRED Magazine: “The Information Factories”, by George Gilder.
The article describes the latest server farm being built by Google in The Dalles, Oregon. You have to read it if you are interested in the progress of computers, data storage systems and high-speed communications; the units used are petabytes and petabits per second. Yet, what fascinated me is the choice of this remote location as the site for the farm. Gilder’s description reads like the script of a Sergio Leone movie. However, this remote valley of Oregon was selected for two reasons that have little to do with its Far West landscape: [1] Input: energy - the availability of cheap electrical power, and [2] Output: distribution channels - the proximity of an ultra-fast fibre-optic communications hub.
The Columbia River Gorge is locked by a half-mile-long dam that feeds a 1.8-gigawatt power plant, once essential to aluminium smelting. Its electricity costs about a fifth of what it would in San Francisco. This will represent huge savings for Google once a million (or so) servers populate the farm. The second selection criterion, a 640 Gbps hub linked to the coastal fibre-optic superhighway, is just as critical, for it makes this fantastic computing power easily accessible from anywhere in the world.
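To get a feel for what “a fifth of the price” means at this scale, here is a back-of-envelope sketch. The server count, per-server power draw and San Francisco electricity price below are my own illustrative assumptions, not figures reported by WIRED or Google:

```python
# Rough estimate of the electricity savings from the cheap-power site.
# All figures are illustrative assumptions, not reported numbers.
servers = 500_000              # assumed server count at the farm
watts_per_server = 200         # assumed average draw per server (W)
hours_per_year = 24 * 365

sf_price = 0.10                # assumed San Francisco price, $/kWh
dalles_price = sf_price / 5    # "about a fifth" of the city price

annual_kwh = servers * watts_per_server * hours_per_year / 1000
savings = annual_kwh * (sf_price - dalles_price)
print(f"Annual consumption: {annual_kwh / 1e6:.0f} GWh")
print(f"Annual savings:     ${savings / 1e6:.0f} million")
```

Even with these conservative guesses, the cheap hydro power is worth tens of millions of dollars a year; double the server count or the per-server draw and the figure doubles with it.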
Now, flash back to the 18th century. Emerging factories are located as close as possible to sources of cheap energy, initially water mills on rivers with a high rate of flow. In addition, the factories must benefit from efficient distribution channels for the goods they produce, first the canals and then the railroads. What a coincidence!
So, although the term ‘server farm’ suggests some bucolic return to the agricultural age, we’re brought back, full circle, to the industrial age, but not necessarily to the smokestacks of the past. Maybe most server farms will be built in valleys, canyons and gorges close to mountains and amid ‘real’ animal farms, so that the few technicians in charge of the IT monsters will see cows grazing in the nearby fields, like in the picture illustrating the WIRED article.
Maybe my home country, Switzerland, will see a high concentration of IT power taking advantage of all hydro-electric plants installed in the Alps (and the Swiss cowboys will yodel Yahooooooooooooooo while checking the wholesale price of milk on their Blackberries ;-).
How much in-house IT support do you need?
A hundred years ago, in the early 1900s, if you wanted electricity, you had to produce it yourself. Businesses had to put a generator in their basement and employ specialists to take care of the whole installation. In America, major corporations had their own VP of Electricity ***.
During the late 1800s and the early 1900s, electrification was fragmented and individualised. In 1900, London was characterised by different currents, voltages and wiring systems that created confusion and hampered progress. Early generators produced direct current, which could only serve a small area because of unacceptable power losses over long distances. So each town area, factory or department store had its own steam-powered electricity generator. Early electric motors were custom built, installed and then monitored for problems; competent engineers who could install and maintain a whole system for lighting and power were therefore the critical resource of the time.
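The “unacceptable power losses” follow directly from Ohm’s law: for a fixed power delivered, line current is inversely proportional to voltage, and resistive loss grows with the square of the current. A small sketch, using illustrative (assumed) figures for load, cable resistance and distance:

```python
# Why low-voltage direct current could only serve a small area:
# delivering power P at voltage V draws current I = P / V, and the
# line dissipates I^2 * R. Halve the voltage, quadruple the loss.
# The load, resistance and distance below are illustrative assumptions.
def line_loss_fraction(power_w, volts, ohms_per_km, km):
    current = power_w / volts          # amps drawn at this voltage
    resistance = ohms_per_km * km      # total resistance of the run
    return current**2 * resistance / power_w  # fraction of power lost

# A 10 kW district load over 1 km of cable at 0.1 ohm/km:
for volts in (110, 11_000):
    lost = line_loss_fraction(10_000, volts, 0.1, 1)
    print(f"{volts:>6} V over 1 km: {lost * 100:.4f}% lost")
```

At 110 V the line wastes roughly 8% of the power over a single kilometre, while the same load at 11 kV loses a negligible fraction; alternating current won precisely because transformers made such high-voltage transmission practical.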
Today, you only call an electrician when you have a problem. Electricity is now a commodity governed by a set of standards that enable easy distribution and interoperability between different lighting systems, motor-powered devices and other appliances.
The same evolution is happening with Information Technology (IT). Its main infrastructure (hardware and system software) is already a commodity. Major vendors use standard hardware and software components and test mutual interoperability, as well as compliance with official and de facto standards.
So, maybe it’s time to entrust outside specialists with the task of supporting your IT infrastructure and to focus your in-house IT staff on the applications that really differentiate your business from your competitors’. What do you think?
*** See “Electrifying America: Social Meanings of a New Technology, 1880–1940”, David E. Nye, The MIT Press, 1992. This book offers a fascinating description of how electricity revolutionised lighting, transportation and power in homes, streets, stores, factories and farms.
Internet Power … Plants
The Internet’s power is power-hungry
Very interesting article in the August 1st issue of FORTUNE Magazine: “Behold the server farm! Glorious temple of the information age!” It’s about the Internet’s backstage: the millions (yes, millions!) of elementary computers (a.k.a. servers) that power sites like Google, MSN and Yahoo.
Before reading this article, I thought that a website like Google was powered by a few regional installations, each made of a few thousand servers, each in a 2 cm thick ‘pizza box’ enclosure, stacked in 2-metre-tall racks (each rack holding about 100 computers). Behind each rack, a spaghetti junction of optical-fibre and power cables. The whole shebang in a room the size of a basketball court, with some decent air conditioning.
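The rack arithmetic in that mental picture checks out, for what it’s worth. A minimal sanity check, with the server count per installation being my own assumption:

```python
# Sanity check on the 'pizza box' rack arithmetic (illustrative only).
rack_height_cm = 200       # a 2-metre-tall rack
server_height_cm = 2       # one 'pizza box' enclosure
servers_per_rack = rack_height_cm // server_height_cm
print(servers_per_rack)    # 100 servers per rack, as described

servers_needed = 5_000     # assumed: "a few thousand servers" per site
racks = -(-servers_needed // servers_per_rack)   # ceiling division
print(racks)               # 50 racks
```

Fifty racks fit comfortably in a basketball-court-sized room; the point of the FORTUNE article is that reality is two to three orders of magnitude bigger.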
My vision was supported by the fact that the industry uses the term ‘server farm’ to describe such installations. You imagine some bucolic and environmentally friendly set-up where ‘brick and mortar’ is insignificant compared with ‘point & click’, don’t you?
Wrong! Google’s newest sites contain an estimated half-million to one million servers. Forget the agricultural analogy with gentle ploughing, sowing and harvesting cycles. We’re talking hard ‘stuff’ here. As serious as the industrial age’s mills, mines, furnaces, foundries, workshops, factories and plants. Not so much smoke, but certainly a lot of heat. Powering the Internet requires lots of (electrical) power, tens of megawatts per ‘farm’.
The paradox is that the information age seems to spend as much energy to cool its installations as it does to power its servers. Isn’t it like driving with one foot on the accelerator and the other on the brake pedal?
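The industry quantifies this paradox with a ratio nowadays called Power Usage Effectiveness (PUE): total facility power divided by the power actually reaching the IT equipment. A sketch with illustrative figures of my own (not numbers from the FORTUNE article), matching the “cooling costs as much as computing” scenario:

```python
# The 'one foot on the brake' paradox as a Power Usage Effectiveness
# (PUE) ratio: total facility power / IT power. A perfect facility
# would score 1.0. All figures below are illustrative assumptions.
it_load_mw = 20        # assumed server load of one 'farm'
cooling_mw = 20        # cooling roughly matching the IT load
overhead_mw = 2        # assumed lighting, power conversion losses, etc.

total_mw = it_load_mw + cooling_mw + overhead_mw
pue = total_mw / it_load_mw
print(f"PUE = {pue:.1f}")   # over half the electricity does no computing
```

A PUE around 2 means every watt of computation buys a second watt of air conditioning; driving that ratio down is exactly the accelerator-and-brake problem the paragraph above describes.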
Design Yardstick: Gauging the Ultimate Computing Machine
Using the Design Yardstick (introduced on August 19) to analyse and illustrate the evolution of Apple’s Macintosh.
I just realised that, in the entry dated August 25 [ Gauging a PC ], I said that I would illustrate the time dependency of the yardstick ‘readings’ over the life cycle of a product family. So, here it is. Let’s take the original Macintosh of 1984, launched with the support of the famous “1984 won’t be like ‘1984’” ad (referring to G. Orwell’s book). From a customer point of view, the 1984 Mac looks like this:
The 1984 Mac had limited functionality and its $2,500 price tag was well over the initial $1,000 objective; but its ease-of-use was light-years ahead of its competitors.
Zoom into 2006 and you get a Mac with incredible power (the latest Mac Pro is insanely fast) and vastly improved functionality, still at the top in terms of usability and clearly standing out aesthetically; the whole justifying a price premium. Windows-based PCs have partially caught up in terms of usability and become the standard commodity; but they are still a collection of ‘stuff’ bolted on from multiple sources, which brings many frustrations despite overall satisfaction. A necessary evil.
Here is how the two systems compare today:
The good news is that, by switching to Intel chips, Apple is getting significantly closer to the ‘standard’, in a position to offer the best of both worlds. At the end of the tunnel I entered about ten years ago, I see a light telling me that, soon, I’ll be able to scrap my Windows PC and do everything on a Mac, again. Yippee!