
• I/O
Does multimedia really have mass-market appeal?

Companies claim full-screen, full-motion. Not so.

Terry Hershey takes helm; charter includes interactive TV.

Companies form WINForum to help push the petition.

Apple raises the ante with announcement of QuickTime 1.5.

WPA Film Library, Image Bank, Westlight explore multimedia.

New company will produce digital HD, VR and interactive.

Worldesign for VR; San Francisco media mecca; Double-speed CD-ROM; Microboards; Video by Bell; Government data online; HDTV test results.

CIA in New York City.


Now we remember how it ruled the world

Many people who are deeply immersed in the industry of personal technology today are barely old enough to remember the time when IBM ruled all it surveyed. Whether you liked it or not — and its competitors certainly didn’t — IBM was deeply entrenched in every large and medium-sized corporation in the world, and its reputation and its stock price were unassailable.

Although IBM responded well to the advent of the personal computer, it never quite figured out how to square the new development with its traditional mainframe business, and eventually allowed clone makers like Compaq to beat it to market with the latest technology. It has spent nearly a decade trying to regain lost ground. And now, after an especially tough couple of years including massive layoffs, first-ever losses, a badly tarnished reputation and some botched product releases, IBM hopes it is on the way to regaining a chunk of lost glory.


On Nov. 10, the company did what would have been unthinkable a decade ago: It released a detailed strategy paper laying out its directions for what it believes is the future of its business — multimedia distributed computing. The strategy is deep and comprehensive and covers the company’s existing product line from personal computers to workstations to mainframes. For the first time, it also sheds light on IBM’s relationship to Taligent and Kaleida Labs, its two joint ventures with Apple Computer.

And unlike other grand but unsuccessful IBM strategies such as SAA — the IBM-centric Systems Application Architecture that tried and failed to pull its disparate computer systems together into a common programming architecture — it claims to have an ironclad commitment to working with other vendors, standards organizations, and even analog technology. It’s the lesson of the 1990s: Proprietary doesn’t pay.

Two heady religions. “We’re stepping up to the plate,” says Michael Braun, assistant general manager of multimedia for IBM, “and telling our customers that when it comes to business, education and the government, standalone multimedia is interesting, but not very.”

What IBM seems to have discovered is a heady blend of two religions, the first being the power of networks and the second being multimedia itself. Not long after a damning U.S. Labor Department statistic was released a couple of years ago, showing that workplace productivity was actually down despite billions invested in so-called productivity tools, companies using electronic mail and other inter-office communication started piping up about the great benefits of enterprise networking.

The name doesn’t matter. Combine the benefits of networking with those of multimedia, widely acknowledged as a rich new form of communication, and the pieces snap into place. Braun believes IBM understands the applications and requirements of a distributed, heterogeneous network — i.e., connected computers of all different kinds with no central controlling processor.

Even more important for the nascent multimedia industry, IBM seems to have learned the hard way that any vendor who wants to sell multimedia into business, education and government must find a way to work within the existing, eclectic infrastructures already in place — a crazy quilt of personal computers, workstations, file servers and peripherals hooked with everything from coaxial cable to fiber to copper phone lines.


The assumption, of course, is that corporations are on some inexorable path toward needing vast stores of digital information on line. How much multimedia data in most corporations is already digital? Not much, compared to what’s still on paper or film. Text, certainly, and spreadsheets. In some companies where print collateral is regularly produced, perhaps some digital imagery.

In larger corporations, especially those which produce goods, there is probably a library of advertising videos, and perhaps some training materials on VHS tape or laserdisc. But generally speaking, the plain-vanilla corporation of today doesn’t have a lot of digital video or sound. It does, however, have boatloads of undigitized data that could be invaluable when combined with existing digital data, if only it were easily accessible.

They’re sitting up now. Braun believes that the potential benefits of delivering either analog or digital media over a corporate network in a variety of applications are making corporations sit up and take notice. His group is in charge of convincing customers that there’s a need for multimedia in the enterprise, but he says it’s selling itself.

“We have companies we’re bidding now that are developing connected multimedia applications involving literally thousands of workstations and kiosks,” he says. “Once the light bulbs go on, which usually takes a pilot application and one person inside the company that ‘gets it,’ the benefits are obvious to them.”


Now that corporations have bought into improved communication as the heart of saving money and gaining a competitive edge, networked multimedia communications is looking very good to corporate customers. Braun says they “jump on it,” especially applications such as kiosks, which actually generate revenue. IBM research says multimedia will become “the core communications vehicle” within the corporate structure.

As proof that this concept holds water, Braun offers the factoid that IBM’s multimedia business — video boards, PCs, authoring software, multimedia integration services — is up 98 percent so far this year.


IBM’s multimedia strategy methodically addresses distributed applications, a distributed systems framework with multimedia enhancements, application tools and services, data services, distributed services, communications and networking, systems management, multimedia server solutions and multimedia standards.

However, before launching into details of the strategy itself, a note of warning. This is not the first time IBM has developed a Real Big Strategy that makes all its disparate product pieces fit together and justifies the need for expensive, IBM-labeled products. The company has made some whopping mistakes in the past, including its rift with Microsoft over a shared, SAA-compatible user interface between Windows and OS/2, and its “home computers” such as the PCjr and, more recently, the PS/1. And then there’s the Prodigy Information Service, affectionately known in the online community as “Plodigy” for its achingly slow performance.

The strategy IBM announced on Nov. 10 has taken many of its past mistakes into account, and is trying to mitigate the damage done. It is significantly more open than in the past, and is far ahead of the pack in acknowledging the vital importance of networked multimedia.

But it is risky for IBM in a couple of ways. First, it is quite aggressive in schedule: Braun says he expects all the pieces to be in place by 1994. Second, it moves well beyond the borders of IBM’s traditional businesses and embraces a much richer range of information, including both digital and analog in the grander scheme. But the company might well argue that there is nowhere else to go; at this point, it seems to be a risk that must be taken. The world’s largest and most influential computer company is redefining itself around a world filled with rich media, and what it does in that world will certainly be felt in the next decade and beyond.

Starting with Ultimedia. IBM announced the Ultimedia product line of personal computers in October 1991 (see Vol. 1, No. 5, p. 16), and the company is in the process of extending its workstation family to include low-cost multimedia player models, multimedia portables and RISC System/6000 Powerstations that are multimedia capable. As it does so, it is enhancing many of the key elements of distributed computing, such as interoperability and data interchange, to accommodate multimedia.

Common user interfaces. One such accommodation is IBM’s old Common User Access interface, or CUA. (Part of the SAA feature set, CUA is said to have caused the final rift with Microsoft.) Much like Apple’s Macintosh or NeXT Computer’s NeXTstep software development systems, CUA provides developers with a common visual metaphor for services such as volume buttons, slide bars and other graphical data objects for controlling multimedia data streams.

In addition to CUA, IBM is also developing what it calls a “Natural User Interface,” or NUI, that will include audio and video help functions, video icons and prompts, and language query or speech recognition for help, navigation or commands. As with CUA, software developers can build applications using the NUI — thus providing familiar cross-application features that have proven such a boon to Macintosh users.

Multitasking a multimedia necessity. OS/2’s Multimedia Presentation Manager allows an OS/2 application to work directly with a multimedia device to request that it play and record, and simultaneously control and synchronize the data streams. Though common enough for standalone systems, these features pose enormous challenges when multimedia data is sent over a network, and require a multitasking operating system. The kind of network communication that guarantees audio and video designed to play together will arrive at the same time is called “isochronous,” for “at the same time.” IBM is developing specialized hardware and software that address the problems of isochronous multimedia.
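As a toy illustration of the idea (a sketch only, not IBM’s design), two timestamped packet streams can be merged into a single presentation timeline, with a check that no packet would play out of order:

```python
# Toy illustration of isochronous delivery: audio and video packets
# each carry a presentation timestamp, and the receiver releases each
# packet only when the shared clock reaches that timestamp, keeping
# the two streams in lockstep. A sketch only, not IBM's implementation.

import heapq

def schedule(audio, video):
    """Merge two timestamped packet streams into presentation order.

    Each stream is a list of (timestamp_ms, payload) tuples already
    sorted by timestamp, as a network receive buffer would hold them.
    """
    merged = list(heapq.merge(audio, video))
    # In a real isochronous network, delivery itself guarantees that
    # each packet arrives before its deadline; here we simply verify
    # that playback order never runs backward.
    for (t1, _), (t2, _) in zip(merged, merged[1:]):
        assert t1 <= t2, "stream would glitch: packet missed its slot"
    return merged

# Audio at 25 packets/sec (40 ms apart), video at 30 fps (33 ms apart).
audio = [(0, "a0"), (40, "a1"), (80, "a2")]
video = [(0, "v0"), (33, "v1"), (66, "v2")]
timeline = schedule(audio, video)
print([payload for _, payload in timeline])
```

The point of the sketch is only that both media share one clock; without that shared timeline, audio and video recorded together can drift apart in transit.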


An old-line mainframe company, IBM made its reputation storing and retrieving vast amounts of data on large networks. Since multimedia content (i.e., video production, animation sequencing, etc.) is expensive to produce, IBM wants it considered a corporate asset to be used as much as possible across different applications throughout the enterprise.

IBM believes that standard data management technologies can maximize these assets by sharing, saving, cataloging, archiving and recovering multimedia content in the same way it has for more traditional information assets.

Some new methods required. Most standard systems such as NFS (Network File System), the de facto standard in the Unix world, are fine for doing “store and forward” transfers of blocks or files of multimedia data. However, because multimedia data is time based and stream oriented, IBM has had to investigate new methods of doing client-server communication for rich data types. The company has developed a couple of solutions. One is an $8,500 analog video server called the Ultimedia Video Delivery System (VDS) based on IBM’s AS/400 minicomputer. Another is a large-scale digital multimedia server, based on IBM’s big-iron Enterprise System/9000, called Workstation LAN File Services.

The VDS is a very interesting bridge system. It controls analog devices such as laserdisc players, TV tuners (IBM sells a board called PS/2 TV) and cable tuners hooked up to the network, using a technology called “F-Coupler” to support up to 70 simultaneous multimedia sessions on a token ring LAN with no impact on digital applications running in the same environment.

In addition to its development of digital multimedia servers — which allow content to be added, deleted or edited on an ongoing basis, something not possible with analog media — the company is also exploring the potential of using digital servers together with analog distribution, in order to offload heavy video processing onto the aforementioned analog bandwidth.


In a networked environment, these scenarios are fraught with land mines. There is no way to control how many people want access to anything on the network, whether it’s video or text. The video must play on each person’s desktop, regardless of its operating system. And even if each person on the network wants to access the same piece of video at half-second intervals, it’s vital that the audio and the video are isochronous.

Although IBM has not yet announced a complete solution to the problem, it says it’s using existing products, such as Workstation LAN File Services at $250 per LAN, on the ES/9000 as the basis for developing a server that can serve hundreds of multimedia data streams concurrently. It is also developing multimedia servers based on the RS/6000 Unix workstation and its PS/2 computers.

Databases and compound documents. Second, it will extend its powerful relational database technology to support multimedia. Database technology is critical to the success of multimedia in any enterprise, and it is upon this sword that many vendors will fall.

After all, it doesn’t really matter how well a networking interface is implemented if the database cannot effectively retrieve information when a user asks for it. IBM speaks of combining the best of relational database technology with the power of object-oriented databases for handling the vast amounts of data stored in a corporate multimedia network.

The company says it intends to provide certain database management products to support multimedia data types so that one database can be used to store and manage all the data on the network. This will be a particularly interesting challenge when it comes to packaging rich data into compound documents or using various media as part of a human interface, as it will be when developers implement the Natural User Interface tools of video icons and speech. The capability to respond quickly, accurately and in synch will put even the finest database tools to the test.
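To make the idea concrete, here is a minimal sketch of one database holding both conventional records and multimedia metadata. The schema is hypothetical, and Python’s built-in sqlite3 stands in for IBM’s relational engine:

```python
# A minimal sketch of a single database managing mixed media on a
# network: ordinary text records and pointers to audio/video streams
# live in one table and answer to one query language. The schema and
# asset names are illustrative assumptions, not IBM's products.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE assets (
        id       INTEGER PRIMARY KEY,
        title    TEXT NOT NULL,
        media    TEXT NOT NULL,   -- 'text', 'audio', 'video', ...
        location TEXT NOT NULL,   -- where the stream actually lives
        duration REAL             -- seconds; NULL for still media
    )
""")
db.executemany(
    "INSERT INTO assets (title, media, location, duration) VALUES (?,?,?,?)",
    [("Q3 sales memo", "text", "server:/docs/q3.txt", None),
     ("Plant tour", "video", "vds:laserdisc-1/ch2", 310.0),
     ("CEO address", "audio", "server:/audio/ceo.pcm", 95.0)],
)

# One query retrieves video assets by attribute, exactly as it would text.
rows = db.execute(
    "SELECT title, location FROM assets WHERE media = 'video'"
).fetchall()
print(rows)
```

The design choice the sketch illustrates is the one IBM describes: the database stores attributes and locations for every media type, so a single catalog can answer for the whole network even when the streams themselves live on different devices.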

Storage is critical. And third, the vital function of managing the heavy storage requirements of multimedia will be handled by the system, rather than manually. In other words, the network will be able to keep track of a hierarchy of storage devices, as well as perform catalog, backup, migration, relocation and archiving functions. How serious a problem is storage? Consider the following:

• 500 pages of text = 1 MB
• 10 fax pages = 640 KB
• 10 color or detailed images = 75 MB
• 5 minutes of uncompressed audio, voice quality = 2.4 MB
• 5 minutes of CD-quality audio = 52.8 MB
• 1 minute of uncompressed, ¼ screen, animation-quality video = 147 MB
• 1 minute of ¼ screen, animation-quality video at 100:1 compression = 1.47 MB
• At 100:1 compression, a two-hour, TV-quality (VHS) movie = 2 GB
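The arithmetic behind figures like these is easy to sketch. The resolutions, frame rates and color depths below are illustrative assumptions, not IBM’s published parameters, so the video figures will differ from the ones above if different assumptions are made:

```python
# Back-of-the-envelope media storage calculator. All parameters
# (resolution, frame rate, color depth, compression ratio) are
# illustrative assumptions, not figures supplied by IBM.

MB = 1_000_000  # decimal megabytes, which the figures above appear to use

def video_mb_per_minute(width, height, bits_per_pixel, fps, compression=1):
    """Storage for one minute of video, in megabytes."""
    frame_bytes = width * height * bits_per_pixel / 8
    return frame_bytes * fps * 60 / compression / MB

def audio_mb_per_minute(sample_rate, bits_per_sample, channels):
    """Storage for one minute of PCM audio, in megabytes."""
    return sample_rate * bits_per_sample / 8 * channels * 60 / MB

# CD-quality audio: 44.1 kHz, 16-bit, stereo, about 10.6 MB per minute,
# so roughly 52.9 MB for five minutes (close to the 52.8 MB figure above).
cd_minute = audio_mb_per_minute(44_100, 16, 2)

# Quarter-screen video at an assumed 320x240, 24-bit color, 30 fps,
# uncompressed vs. the same stream at 100:1 compression.
raw = video_mb_per_minute(320, 240, 24, 30)
squeezed = video_mb_per_minute(320, 240, 24, 30, compression=100)

print(f"CD audio: {cd_minute:.1f} MB/min")
print(f"1/4-screen video: {raw:.0f} MB/min raw, {squeezed:.2f} MB/min at 100:1")
```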

Obviously, without compression, practical storage of digital video is impossible. But even with compression, an enterprise that attempts to store and retrieve any significant amount of video will run into the need to build entire rooms for its arrays of magnetic and optical storage devices.

Certainly both compression technologies and digital storage will evolve, but until then, IBM claims its networking strategy will support both analog and digital storage devices for multimedia, including analog tape for audio and video, film, paper and optical discs.

Remote as local. Because so many different devices must be hooked into the network, IBM sees the need to support access and sharing of remote data as if the data were local. Via a system called “system-managed storage,” data files and data-management files will be stored independent of the storage device so even if a file is offline at the time someone requests it, the system can pinpoint its location.

This is a good solution for people who have significant money invested in analog storage or even CD-ROM jukeboxes, and it also allows more flexibility for the network to support many disparate clients that may be storing multimedia objects, including DOS, OS/2, Windows, Macintosh, Novell NetWare, AIX/6000 and SunOS machines.
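The catalog idea at the heart of system-managed storage can be sketched in a few lines. The class and field names here are illustrative assumptions, not IBM’s interfaces:

```python
# Sketch of "system-managed storage": a catalog maps each logical
# asset to its current device, so a client asks for a name and the
# system resolves the location, even when the media is offline.
# Device and asset names below are hypothetical.

class StorageCatalog:
    def __init__(self):
        self._entries = {}  # asset id -> (device, path, online flag)

    def register(self, asset_id, device, path, online=True):
        self._entries[asset_id] = (device, path, online)

    def locate(self, asset_id):
        """Return where an asset lives, independent of device type."""
        device, path, online = self._entries[asset_id]
        if online:
            return f"{device}:{path}"
        # Offline media can still be pinpointed for an operator to mount.
        return f"OFFLINE {device}:{path} (request mount)"

catalog = StorageCatalog()
catalog.register("training-video-12", "laserdisc-jukebox-2", "side A, ch 4",
                 online=False)
catalog.register("logo-art-001", "lan-file-server", "/media/logo.tif")

print(catalog.locate("logo-art-001"))
print(catalog.locate("training-video-12"))
```

Because clients name assets rather than devices, the system is free to migrate or archive files behind the scenes, which is exactly the flexibility the paragraph above describes.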


The bottom line of distributed multimedia computing from IBM’s point of view is that anyone on the network should be able to video conference or collaborate in real time, or store or retrieve any kind of multimedia information at any time, from anywhere, on a standard network. The user should never know whether he or she is even operating over a network; all the storage and retrieval and system operations should be invisible.

To accomplish this in existing networks, IBM claims it will provide application support and extensions to existing networking protocols such as TCP/IP, SNA and OSI, along with all the subnetworking support required: exploiting existing LAN wiring and enhancing it to support continuous data streams, supporting new multimedia networking interfaces such as ISDN, ATM and ADSL, and providing high-speed interconnections between systems that support multimedia.

Fiber and fast-packet switching will be supported, and IBM is already implementing the multimedia data stream extension of the Internet, the Experimental Internet Stream Protocol Version 2. And if you’re not sure what you have or what you want, it will provide consulting services.

In addition, though distributed computing is definitely where it’s at for the future, in the corporate setting centralized management of media resources will be very important as well. Kiosks need it to control the flow of content and updates, and to troubleshoot remote network sites. Training and educational applications will be networked and delivered “just in time.”


As is true for any large company, IBM sits on a vast number of national and international standards committees. They cover all the major areas: image compression, including MPEG for motion video; HyTime, which extends the SGML standard to time-based media; multimedia and hypermedia objects; compound documents; high-speed multimedia networking (ISDN, ADSL, FDDI, ATM, etc.); wireless local networks; and voice-data networks.

Its Ultimedia line supports the Interactive Multimedia Association’s practices for portability. OS/2 2.0 supports MIDI, PCM, ADPCM, CD audio, CD-ROM and CD-ROM XA. Action Media, IBM’s digital-video adapter card, supports DVI, the Intel-IBM open(ish) standard for image capture, compression and playback (see Video for Windows story, p. 19). Its Person to Person video conferencing system will be ISDN-enabled in 1993. PS/2 TV supports NTSC and PAL. And not only is IBM helping define and support the standards, but the Multimedia Presentation Manager in OS/2 2.0 will provide a conversion service between formats.

On various platforms the company supports RIFF and Bento container formats for data capture; MPEG, DVI, JPEG and JBIG compression and decompression; and CD-ROM, write-once and Photo CD storage.


Supporting standards is always like rooting for the home team; you don’t have to worry about anyone not liking you for doing so. But slightly riskier is the company’s participation in joint ventures with longtime rival Apple Computer.

Its Taligent venture, to create an object-oriented operating system for RISC systems, is on schedule and its work will contribute to the object management work that will be done in database systems for multimedia. More interesting for Digital Media readers, however, is how its Kaleida Labs joint venture fits in with its larger strategy.

Networked ScriptX is coming. Kaleida, you may recall, was founded in part to invent a universal scripting language (called ScriptX) for interactive multimedia that allows developers to write an application once to run on multiple platforms. In addition, Kaleida is creating runtime environments for titles, the Bento container format for multimedia data and a consumer operating system for handheld and other consumer-targeted devices that have never been IBM’s forte.

In addition, and of particular importance to IBM, Kaleida is also developing a networked ScriptX that will let developers build distributed applications for any ScriptX-enabled platform.

“Kaleida serves a very important role, especially in the network world,” says Braun. “Most people are thinking about the standalone space right now, because that’s where multimedia lives. But think about titles on CD-ROM that play on Macs or Windows or DOS machines or wherever. Now picture me in a heterogeneous network world, wanting to do the same thing. ScriptX becomes even more important, of even more value.”

Perhaps a match with IBM tools. IBM has found a novel way to move ScriptX into its Ultimedia Tool Series. The Tool Series is an affiliated label program of independent tool makers that helps give them marketing clout in a very fragmented market. The program has its own specialized marketing channel, called Media Sorcery, whose sole charter is selling multimedia tools.

Braun says Tool Series members have to agree to import and export a certain set of formats. “Obviously what we want to do when we have ScriptX is to use the Tool Series technical council as a way to leverage ScriptX into a large number of tools,” he says. “First we got the tools working together, next we start solving the problems of customers, then ScriptX will be available; that’s the strategy. Of course, it’s premature to say that the members have agreed to use ScriptX, but we’re hoping they’ll like it and will see the benefits. In any case, we’ll have an organized forum where we can have that dialogue.”


Both IBM and Apple hope that Kaleida may finally provide the crowbar they need to pry Microsoft out of top slot in the operating-system world. Kaleida’s consumer operating system is designed to interoperate with ScriptX-developed applications and titles to allow playback on consumer platforms. If this strategy is effective in pulling high-profile consumer electronics firms into the Kaleida fold, developers will certainly follow — and Microsoft might eventually be forced to support ScriptX. (We can hear Microsoft muttering now: “In a pig’s eye… .”)

The benefit to users, of course (how easy it is to forget them), would be that they’d have far fewer annoying surprises about whether X title worked in Y consumer player or at the office. This would be vastly preferable to watching the vendors waste their time bludgeoning each other about whose incompatible standard is more worthy of developer talent and time.

Of course, nothing is ever that easy, and IBM still has a very steep road ahead to convince DOS and Windows users and developers to move to OS/2. Braun doesn’t like to address the question directly. He says, “When we go to conferences we see whizzy demos and we don’t ask what’s running those things. The response time is good. They look great. But when people go to design these applications, it hits them and they have to ask those questions.”

Is OS/2 the cornerstone? What he’s getting at, in case you couldn’t tell, is the superiority of OS/2 to Windows for multimedia. “The one thing that everyone knows is that multimedia loves storage, memory, MIPS, the more color the better. If you really want to squeeze every ounce you can out of the hardware, you need a strong system,” he says. “OS/2 ain’t the only platform, but it is the best for multimedia on an Intel-based platform because it is the most robust.”

Yes, but we must remind Braun that it makes no difference how good OS/2 is if everyone is using Windows, as is the case today. If OS/2 is really the cornerstone of IBM’s strategy, it may be in trouble.

From a user’s standpoint, he says, it doesn’t matter anyway because multimedia networks, as IBM sees them, will include all kinds of computers, among them Windows and OS/2. But if Microsoft’s MPC or Windows developers decide for some reason that ScriptX isn’t good enough for them, Microsoft will still rule the roost. Microsoft is as formidable and nasty a competitor today as IBM was in its heyday, and as the software giant comes closer to shipping its new Windows NT operating system, the marketing shenanigans between the two companies are likely to escalate dramatically.


No matter what the outcome, however, it’s just good to see IBM back in the game with a sensible strategy. It never would have shaken off its inertia without the intense competition provided by companies like Microsoft, so in the long run — and barring any proof of unfair competition — its fall from grace will prove to have been a healthy thing.

IBM’s stumble is a topic for an entire book, but certainly the fact that it was crippled by its own size and power was a significant factor. The corporation’s restructuring to split the company into separate operating units was the most important first step it could have taken back to health.

Making it easier. Braun says all the independent IBM divisions are implementing multimedia across their product lines, and as they do so, he says, his job actually gets easier instead of harder. “Now all these companies have to act in their own self-interest,” he says. To help the company understand the value of multimedia internally, Braun’s group started the Market Focus Community, where executives from each of the divisions work on common issues.

“Now that they can see what the others are doing, the competitive spirit moves everything forward,” says Braun. “It’s a different kind of politics. Before, when we were telling them what they had to do, their natural human reaction was to say, ‘Don’t tell me what to do.’ Now … they move on it.”


If there is a downside to the world’s biggest computer company finally getting its act together, it’s that it all sounds too tidy. We’ve all heard the saying that when things get tough, “act as if” they’re fine and eventually they will be. And there are a lot of “ifs” in IBM’s strategy.

So, the story goes, if Braun is right and IBM’s customer base is starting to clamor for multimedia networks, and if those customers still have enough faith in IBM to invest in the company’s strategy and solutions, and if the technologies it develops for multimedia enterprise computing actually come out of the oven according to recipe and on schedule, then the company may be able to say, “We did it.” But if IBM cannot manage to turn the tide in its favor with this strategy, multimedia will be the biggest albatross a company ever tied around its own neck.

Denise Caruso

New name signals expanded scope for ‘Gates’s other company’

To date, much has been rumored and little said about Interactive Home Systems or IHS, also known as Microsoft chairman “Bill Gates’s other company,” based in Bellevue, WA. Most press reports have centered on the company’s sketchy initial plans to acquire digital licensing rights to art from museums and collectors and to transmit those digital images into the home for display (see Vol. 1, No. 4, p. 19).

Many things about the company have changed since then — most recently its name, to Continuum Productions Corp. But far more significantly, its work in home systems has led the company to greatly expand its scope. In an exclusive Digital Media interview, Continuum president Steve Arnold mapped out the company’s new direction in general terms and detailed some of the challenges that face Continuum.

The mission is in the name. Inventing the future — especially inventing the future for Bill Gates — is a pretty tough challenge and Arnold knows it. “We were looking for a way to describe the position of what we were doing, and we saw that the notion of ‘continuum’ really captured it,” says Arnold. “What we’re about is a continuum between old and new media.”

This particular continuum is one that nearly every media and technology company is addressing, but it’s safe to say that most are still at the beginning of the learning curve. Although the company has work in progress, Arnold made no product announcements.


The company is openly focused on building large databases of digital media and, in fact, recently announced the acquisition of rights to four large image archives (see Vol. 2, No. 5, p. 20). But in addition, it is beginning to focus on what Arnold calls “the point where content or programming in media sense intersects with programming in computer sense.” The software that Continuum will develop addresses this intersection point by improving the quality of interaction with digital information — giving people “more choices, more of what they want when they want it, and making some assumptions about things they will want when we’re able to give them interesting and meaningful things in a high-quality media environment,” according to Arnold.

“The concept is fundamentally different than a CD-ROM that does any single category really well,” he adds. “A lot of what people enjoy is to navigate across concepts and ideas, instead of getting deeper and deeper into a narrow category of information.” But, he says, if Continuum can provide a window into a “broad and vast information resource” that consumers could explore, then maybe what it would be defining is a new kind of programming that sits on top of a pool of information.

Between now and 1995, Arnold says the company will concentrate on inventing that new kind of programming — by acquiring and bringing content into the digital domain, as well as cataloging and organizing it in a software environment that’s accessible to people as an information resource.

That environment, which has been under development in Bellevue for some time, is key to Continuum’s vision. Designed specifically for interactive media, this software — variously referred to by Arnold as a “visual information system,” a “navigable information environment” or a “media environment” — will hopefully lower the barriers between people and digital information. Arnold says the company may actually publish a few titles in the nearer term, more to test concepts than for revenue.

The sophisticated software system Arnold envisions will be invented at Continuum. To date, the company claims no religion regarding hardware platforms and wants to encourage deep and fruitful collaboration with many vendors and media companies involved in the same transition as Continuum.

Pretty pictures weren’t enough. The company’s “interactive home” activities will continue in a separate division and include projects such as Gates’s much-discussed home art gallery and other areas where the design of integrated media meets environmental systems.

It was while working on the interactive home business that Continuum was initially trying to create — via moving massive amounts of high-quality visual media into the digital domain — that the team realized it made no difference how gorgeous or high-resolution the images were if people didn’t get any additional value from viewing them in digital form.

That spurred the company’s first forays into “visual environment” research, but soon it realized it needed to go much further. This new environment would have to be able to sit on top of the large pools of media that Continuum was building; in order to earn its keep, it would also have to be adaptable to many different applications, from business to home to theme parks to location-based entertainment venues such as movie theaters and shopping malls.

This also meant expanding Continuum’s early emphasis on visual media into a new kind of digital studio that is capable of producing digital information in all media — from 3D simulations to sound, video, still images and text. The resulting database would be designed to be capable of easily “repackaging” itself into different combinations at the will of the user.

Knowing what to want. Arnold has rightfully tagged one of the most significant problems facing the industry as it moves toward interactive media: No one really knows the will of the user, least of all the user. Consumers don’t know what to want from interactive media any more than developers know what to give them. And, he says, to the extent that the people inventing hardware haven’t known what to want, consumers are being sold crippled platforms that haven’t lived up to the possibility of what can be done.


Because an interactive environment will be responsive to its audience in the most fundamental way — not an interactive retrofit of existing media — the programming itself should be able to cross categories among business, entertainment, home education, etc., in a way that is likely to obviate hard-wired platforms. This combination of a media database plus a visual interaction environment will yield what Arnold dubbed “a new kind of programming in the media sense.”

To invent this new programming, he says, is likely to require the remaking of some of today’s media libraries. “If you were to take a library of characters and apply them to CD-ROM titles,” he says, “you could do some interesting extensions of the video game paradigm that maybe would add value — better animation, better sound. That may go a long way to helping to develop this market.”

For example, a company like Disney might allow its characters to be used as part of an interface that could cut across genres such as entertainment and education.

Getting into the studio business. An integral part of the Continuum plan, then, is also to become a digital production studio, either for its own material or for analog libraries that need to be converted into the digital domain. “We probably won’t be the place to go if somebody has a large text library and just wants to put it into a database,” says Arnold. “But to the extent that the information is of interest to a general audience, you can expect our environment to be good at tagging that material, and making it accessible in a lot of different ways for a number of different applications — publishing, creating other titles and direct delivery to consumers in one way or another.”


Arnold admits that the scope of what Continuum is attempting is “staggering,” but, he adds, “it’s also the most important thing on the planet if we can pull it off. It will mean personal empowerment and democratization of media, it will open the editorial agenda, it will even allow people a channel to upload their responses to information into this domain. It is an awesomely complex task, but it is the thing that is worth doing.”

In fact, one of his main concerns right now is that the industry hasn’t set its sights high enough. “There is the risk in all of this that people will underwhelm the opportunity,” he says. “They’ll settle for a standard or a technology or a platform or a distribution medium that’s not sophisticated enough to live up to the promise.”

First Cities not enough. Arnold believes that Continuum’s commitment to transition into a digital world is much stronger than, for example, that of the First Cities group, 11 companies that have banded together to explore home multimedia markets. “The First Cities initiative is great,” he says. “There ought to be more places where media and technology and networking companies all come together, but it has to be done in a context. We want to talk about the experience that we can deliver to people, and use technology to enhance it. We don’t want to get hung up on the technology itself. We want to change the relationship between people and information.”

Toward that goal, Continuum is talking to content creators or what Arnold calls “keepers of content,” and says the company will certainly create some of its own. “We’re also willing to collaborate with anyone who’s thinking about the evolution of how to make content available to people,” he says. “We think of this as a way to bring new content resources into existence.”

Exerting some influence. The company doesn’t intend to ignore the important work that must be done with hardware vendors and distribution media providers like telephone companies and cable operators, and Arnold hopes Continuum can work with them to influence what they do.

“What we have to do, all of us, is make sure that we understand what’s possible and interesting and meaningful,” he reiterates. “The question is how to communicate a vision that’s substantive enough. If people want 30 channels of cable, maybe they’ll want 150. But they’ll need a new kind of programming, which we think will be powerful software technology coupled with high-quality content. We want to focus on that end of the continuum.”


Of course, a project of this scope plops Continuum right in the thick of all things digital media, a position where many people fear the power of someone like Bill Gates. “That’s the challenge for us,” Arnold says. “We have to be able to rise above the politics and the business issues and the personality issues. We are going to build an information resource that will be a redefinition of the support of human imagination. But in the short term, I know people will say that, one, it is too huge, beyond comprehension; and two, they’ll say it’s just Bill trying to own the world.”

Another concern is that Continuum’s ambitious long-term goals will lead people to believe the company is nothing more than a “flaky think tank that some rich guy is funding.” Although Gates’s financial power enables Continuum the luxury of aiming toward some very long-range goals around designing a new medium of communication, Arnold says the company’s goals and short-term milestones are clear.

“We want to make this a good business because if we don’t it won’t happen,” Arnold says. “But what we’re about is long-term relationships, nonexclusive rights, partnerships to evolve a new understanding of what’s possible and adding meaningful interactivity — delivered via computer technology — to the domain of media. We have to figure out a way to define partnerships so it makes sense to us and to them, and to evolve our vision slowly and carefully.”

“We aren’t Microsoft.” More than anything, Arnold knows his biggest task will be to separate Continuum and its goals from Microsoft, which has an unflattering reputation in the computer software business for being rapaciously competitive and pre-emptive.

“We aren’t Microsoft. They’re in a different business than us, and we may even compete with them in some areas,” says Arnold. “In order for us to be successful, we need to earn our own reputation. This is a little flame, and in spite of the fact that Bill has a lot of resources and a high-level commitment, we’ll be snuffed out if we aren’t perceived as leading with vision and that what we’re doing is interesting and powerful in a way that is good. I don’t know if there’s anybody in the world who would make this kind of commitment other than Bill.”

Denise Caruso


Mac companies fudge facts in ‘spec wars’

The quest for full-screen, full-motion digital video on the Macintosh rivals the search for the Holy Grail. It is a frenzied and sometimes downright nasty competition. To date, three desktop hardware companies actually claim to have captured the Grail: New Video — the only one shipping a product — SuperMac Technology and RasterOps.

Each claims that its product can capture, display and output 30 frames of video per second at “full screen.” This is true on the surface, since all three products can display a full screen of video at full speed.

However, for most people in the computer industry, full screen means 640×480 pixels, with no cheating. None of these companies’ products — at least in their current versions — actually achieves this resolution during capture and/or output.


Although computers use display screens that look an awful lot like television screens, one of the hardest things to do on a computer is to process and display full-frame, full-motion video images. The reason is that, at present, analog television and digital computers have remarkably little in common.

The desired result is to digitize, compress and decompress full-frame (640 pixels by 480 scan lines for NTSC video), full-motion (30 frames per second, or fps) video, and full CD-quality sound.
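The arithmetic behind that goal shows why it is so hard. A back-of-the-envelope sketch in Python (the frame size and rate are the figures cited above; the 24-bit color depth is an assumption for uncompressed RGB, not a spec from any of these vendors):

```python
# Back-of-the-envelope data rate for uncompressed full-frame, full-motion video.
# 640x480 pixels at 30 frames per second, with 24-bit (3-byte) color assumed.
width, height = 640, 480
bytes_per_pixel = 3        # 24-bit RGB (assumption; color depth varies by product)
fps = 30

bytes_per_frame = width * height * bytes_per_pixel   # 921,600 bytes per frame
bytes_per_second = bytes_per_frame * fps             # 27,648,000 bytes per second

print(f"{bytes_per_frame:,} bytes/frame")
print(f"{bytes_per_second / 1e6:.1f} MB/s uncompressed")   # 27.6 MB/s
```

Nearly 28 megabytes every second, before audio — which is why every product discussed below leans so heavily on compression and on sampling shortcuts.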

Most companies, especially those developing desktop products, use techniques such as interpolation and doubling the video scan lines to expand a lower-resolution image to fill a 640×480 window and achieve the illusion of 640×480 video.

What’s the big deal? The first problem is that in order to avoid the appearance of flicker at the 30-fps refresh rate used for NTSC (the North American broadcast television standard), TV images are “interlaced”: a TV first paints a “field” containing all the odd-numbered scan lines, then a second field containing all the even-numbered scan lines.

Computer monitors, which are no longer interlaced, leave designers three choices for dealing with this problem.

First, they could use memory buffers to interleave the two fields, then paint them both onto the screen at once. This is very expensive, because two buffers would be required: one for the frame in progress, and one for the frame right behind it. A control mechanism would have to ping-pong between the two buffers to continually refresh the monitor.

The second choice is to pick up one field of scan lines and simply paint every one of those scan lines twice, essentially throwing away half of the video information.

The third, which not many do, is to interpolate the missing scan lines, or take an average between Scan Line 1 and Scan Line 3, for example, and paint that as Scan Line 2.
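The second and third options can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's actual circuitry: single numbers stand in for whole scan lines of pixels, and one field holds every other line of the frame.

```python
def line_double(field):
    """Option two: paint every line of one field twice, discarding half the data."""
    out = []
    for line in field:
        out.extend([line, line])
    return out

def interpolate(field):
    """Option three: synthesize each missing line as the average of its neighbors."""
    out = []
    for i, line in enumerate(field):
        out.append(line)
        if i + 1 < len(field):
            out.append((line + field[i + 1]) / 2)  # average of the lines above and below
        else:
            out.append(line)  # the last line has no neighbor below; repeat it
    return out

field = [10, 20, 30]          # odd scan lines of a tiny six-line "frame"
print(line_double(field))     # [10, 10, 20, 20, 30, 30]
print(interpolate(field))     # [10, 15.0, 20, 25.0, 30, 30]
```

Both fill the full frame from half the data; interpolation simply makes a smoother guess about the lines that were thrown away.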

Choosing samples. Those solutions deal with the interlacing problem. The other decision to make is the number of samples across each scan line. To maintain the 4:3 aspect ratio of a 480-line television image on a computer screen with square pixels requires 640 samples per scan line. This is more than twice the resolution of VHS videotape and one-and-a-half times the resolution of a laserdisc.

In the interest of reducing the amount of digital data, most vendors actually take 320 samples per scan line. They have to interpolate the missing data to blow this up to 640 pixels for a full 640×480 image display.
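The savings that motivate this shortcut are easy to quantify. A quick sketch, assuming a vendor captures one 240-line field at 320 samples per line rather than a true 640×480 frame:

```python
# Why vendors sample 320 pixels across one 240-line field: a quarter of the data.
full_frame = 640 * 480    # true full-screen capture: 307,200 pixels per frame
reduced = 320 * 240       # one field, half the samples:  76,800 pixels per frame

print(full_frame // reduced)   # 4
```

One quarter as many pixels to digitize, compress and store per frame — the rest is invented on the way to the screen.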

A true 640×480 digital television image displayed on a computer-quality monitor would be stunning — far better than anything you see on your television set. An enlarged representation of a 320×240 image — especially one that has been compressed and decompressed — will not look nearly as good. However, if the technology used is really clever, it may look as good as many TV images — certainly good enough for a great many applications.

Taking 640 samples per scan line (as SuperMac appears to do) should yield a distinctly better picture. Ultimately, as the technology improves we will get “true” 640×480 digital video on computer screens, but in the meantime, purveyors of good-looking desktop video ought to be proud of themselves for the progress they’ve made to date.

Come on, come clean. New Video and RasterOps admit to doubling the scan line and interpolating the “square pixels” from interlaced NTSC video to produce full-screen video on a display device.

The only difference between these two and SuperMac, according to many sources outside of the company, is that SuperMac won’t admit it’s fudging. Steve Blank, SuperMac’s vice president of marketing, maintains that the company’s DigitalFilm board is powerful enough to sample 640×480’s worth of pixels — though others who’ve used the product say it is actually doing only 640×240. Blank refused any other comment on this article, and SuperMac did not return repeated telephone calls by press time.

In addition, SuperMac is publicly bad-mouthing RasterOps as an inferior solution. Not only is this bad karma and in bad taste, but these kinds of spitting matches are enormously confusing to the customer. It might be more productive for them all to address the real issues surrounding widespread adoption of digital video products: storage requirements, which to date are enormous, even with compression, and image quality, which in all cases is somewhat south of perfect.


Products by all three companies do an excellent job of capturing analog video, converting it to digital, compressing it for storage or transmission, then decompressing it with a minimal number of artifacts so it can be displayed on a computer screen or output to videotape or television. When you consider the technical heroics that must be accomplished to do this using a desktop computer — taking the analog standards of NTSC or PAL and actually figuring out a way to get things like scan lines and vertical blanking intervals sampled into digital format — all three companies have earned great applause.

If they would stop mudslinging long enough to look at the desktop video market, they would realize that right now not one of the three has a competitor in its chosen segment.


To be fair, New Video has not been much of a participant in the mud baths. Its EyeQ product — actually the first and only product to come to market at this point — was shipped in July without fanfare, hissing or spitting. The company is young and does not yet have the marketing and sales power of either SuperMac or RasterOps.

Its product, designed for the Macintosh in either a two-board set for digitizing and playing back video ($4,495) or with one board for playback only ($2,495), is targeted toward the multimedia presentation market, including networked video for education and training. Like the other two products, EyeQ sports the standard specs: It can capture, compress and display simultaneous video and audio. It can capture and display 30 frames of video per second and make it appear at full screen size without noticeable artifacts. It supports composite, S-Video or RGB input, as well as NTSC and PAL broadcast standards.

But EyeQ has some obviously superior design features that have snagged the interest of such companies as Apple (it was the only board Apple used for digital video demonstrations during its Worldwide Developers Conference) and other large multimedia developers.

For one, EyeQ has a DSP chip on board, which can sample 16-bit sound at CD-quality levels without requiring participation by the computer’s central processor — a real plus considering the sheer mass of audio and video files. It can also play back full-motion video even from slow data rate devices, including CD-ROM and networks, a feat none of its competitors has mastered.

In addition, EyeQ uses Intel’s vastly improved new DVI programmable video processing technology, which, in turn, provides the product with the capability to support a variety of compression schemes and hardware platforms.

A prescient move. In what turned out to be a prescient move, New Video decided to buck the past trend toward JPEG compression and built Intel’s i750 video codec chip set into EyeQ. The i750 supports a variety of algorithms, including RTV (real-time video) and PLV (production-level video), JPEG for high-resolution still images and AD-PCM for audio. It can easily be reprogrammed to support new algorithms as they’re developed. Cases in point: New Video showcased a proprietary near-broadcast quality (60 fields per second) video compression algorithm during Macworld Boston this year, and Microsoft is using the DVI chip set in its Video for Windows product (see p. 19).

EyeQ also offers the allure of cross-platform playback capabilities, since it supports RTV 2.1, a cross-platform, scalable video algorithm supported under DVI. This technology enables video output from the Macintosh in the same file format supported by IBM’s ActionMedia board, which in turn allows playback of EyeQ-created video on any computer equipped with ActionMedia.

One drawback to the existing EyeQ board set for many consumers is that it does not provide support for Adobe Premiere 2.0 and Diva VideoShop, both popular with desktop video users. A free software upgrade that supports these products is on the way for those who already own the EyeQ hardware. EyeQ does support QuickTime as well as applications such as HyperCard, IMC’s Special Delivery and MacroMedia Director.


The modular approach taken by RasterOps to digital video technology makes it particularly attractive to the professional video production market.

The company has created a family of products with the expectation that professional video editors will want to customize their digital video studios. This approach allows producers to take advantage of certain components they already own, which in turn helps them configure a system that suits their particular needs.

RasterOps’s digital video is based on MoviePak, a product that is still in beta testing and not expected to be shipped until mid-November. MoviePak is a $1,999 add-on card that must be attached to one of the company’s 24-bit video display boards.

MediaTime, one of RasterOps’s two 24-bit cards that can drive a 13-inch monitor, also provides CD quality sound capture and playback. The card uses DigiDesign’s AudioMedia chip set to sample 16-bit audio. The other RasterOps configurations require the use of an audio-digitizing card from a third party.

For output to videotape or to a TV monitor, RasterOps opted for an off-board encoding/decoding solution, thus offering professionals the chance to use high-end encoders/decoders.

What’s required is either RasterOps’s Video Expander, a $699 external controller box that enables video editors to output encoded video to a VCR or TV monitor with RGB pass-through, S-Video, and/or composite video connections, or a RasterOps-compatible encoder/decoder box. Video Expander has genlocking capabilities as well.

MoviePak uses a compression chip set from LSI Logic Corp. that supports a variety of ratios from 2:1 to 160:1. When it’s first released, it will be bundled with Adobe Premiere 2.0 video editing software.
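To put that compression range in perspective, here is a rough sketch of the storage arithmetic. The uncompressed baseline (640×480, 24-bit color, 30 fps) is an assumption for illustration, not a MoviePak specification:

```python
# Storage for one minute of video at the quoted 2:1 and 160:1 extremes.
# Baseline: uncompressed 640x480, 24-bit color, 30 fps (assumed for illustration).
raw_bytes_per_min = 640 * 480 * 3 * 30 * 60   # 1,658,880,000 bytes: about 1.66 GB

for ratio in (2, 160):
    mb = raw_bytes_per_min / ratio / 1e6
    print(f"{ratio:>3}:1 -> {mb:,.0f} MB per minute")
```

At the gentle end of the range a minute of video still swallows hundreds of megabytes; only at aggressive ratios does it approach what early-1990s hard disks could reasonably hold.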


The high-end video professionals moving down to the Macintosh and the multimedia presenters market are only part of the digital video puzzle. There is yet another segment left in the market, one that SuperMac’s technology comfortably fills.

SuperMac’s DigitalFilm product is a double-card, single-slot configuration and an external box that makes it an easy choice for computer users who have begun experimenting with digital video. It is what is known as a plug-and-play solution — albeit an expensive one.

DigitalFilm, now in beta form, is expected to retail at $5,999. It’s a NuBus card set that handles simultaneous video and audio, video compression and decompression, and video encoding and decoding, in a package that only requires a single Macintosh slot. SuperMac also bundles Adobe Premiere 2.0 video editing software with the DigitalFilm board set.

DigitalFilm fits into a Mac IIfx or, preferably, into one of Apple’s Quadra series of computers. (Both RasterOps and New Video have similar minimum configuration requirements.) Although it can capture, play back and output video at the full-motion standard of 30 frames per second, DigitalFilm does not yet support true CD-quality (16-bit) audio on board, though the company says it will.

Compression and decompression are handled on board via C-Cube Microsystems’ JPEG hardware for still images. SuperMac calls this “motion JPEG,” though there is actually no such standard. JPEG was not designed for time-based media, either audio or video. That said, the on-screen image quality produced by SuperMac’s products is excellent, even in beta. The card can handle a compression ratio of up to 70:1.


The irony of the spec wars, where vendors fudge the facts on paper to get a “perfect 10,” is that these details ultimately don’t matter. Desktop video customers look to the personal computer as an inexpensive tool for creating compelling prototypes for clients and storyboards for film and video projects. They do not yet use desktop technologies to produce final broadcast output.

Customers, in particular the professional video editors evaluating digital video products, look for image quality. They don’t really care about the technical details of “full-motion video running at a resolution of 640×480,” as long as the quality of the video is good and the storage requirements are somewhat reasonable, at least in comparison to other solutions.

They won’t even notice. Hobbyists dabbling in desktop video aren’t likely to challenge vendor claims. Even if they notice the difference, they aren’t likely to care. The same disregard for specs holds true when you get to the presenters or business communicators. For the most part, they’re seeking something that’s easy to use and makes them look good in front of their bosses and peers. They don’t want or need to know about square pixels and doubling scan lines. But this doesn’t mean they shouldn’t be told the truth.

Desktop video is not a war, nor should it be seen as an exercise in ego gratification. It is an industry and it appears that there are plenty of customers to go around — at least there had better be, since there are many powerful and well-regarded companies like Radius that are just a few months behind in announcing these types of products themselves. It’s a good bet that customers would like to see companies stop the irrelevant competitor-bashing and start figuring out how to fix the storage and image-quality problems that are really holding back digital video from widespread adoption.

Janice Maloney

Warner reveals tie-in with interactive TV

In a surprise move only one month after nearly a dozen staff members were laid off and days after a major presentation at the CD-ROM Expo in Boston, Warner New Media’s president and CEO, Stan Cornyn, announced his retirement from the Time Warner subsidiary. Terry Hershey, director of corporate development and technology for Time Warner, has taken over the helm.

Many believe that the changing of the guard signals a more direct move toward interactive television at the Time Warner subsidiary, a move that makes sense considering its 150-channel Quantum cable system in Queens, NY. In fact, during remarks at the Boston Expo, Cornyn made the first public comments about how Time Warner might exploit its cable connections to help sell more interactive discs.


Hershey, who will retain her position at Time Warner’s corporate office in addition to the presidency of Warner New Media, says that the CD-ROM focus is too narrow anyhow.

“We aren’t in the CD-ROM business and we aren’t in the cable business and we aren’t in the floppy disk business,” Hershey says. “What we’re in is the interactive multimedia business, and that may take a variety of forms — electronic delivery as well as physical forms. From Time Warner’s point of view, which says we have all kinds of content from information to entertainment here, what we want to do is look at all the alternatives for delivery.”

She recalls the days when videotapes were pegged as the death knell for the movie theater business. “But in fact, they just added to it,” she says. “Every time a new medium is developed, it increases the opportunity for delivery of our products. Choosing one or the other isn’t the relevant question. The question is, We’ve got it, what’s the best way to deliver it?”

Hershey says the future of Warner New Media is solid, and that rumors of the company’s imminent demise are untrue, but declined to comment further on future directions. “This absolutely is not the end for Warner New Media,” she said. “But our philosophy is not to talk until we have something to say.”


During his tenure at Warner New Media, Cornyn often talked about interactive multimedia for television. At the first Digital World conference in 1990, he demonstrated a prototype called The Whole Megillah, an interactive system that he claimed could be implemented either in the CD-ROM/player format or broadcast over cable.

However, insiders say very little work was actually done on interactive TV systems in Warner New Media because, as one said, Cornyn didn’t like the way CD-ROM titles looked on a TV screen.

Walt Klappert, head of technology for Warner New Media, says interactive TV designs will “definitely be part of what’s on our plate, more so than in the past.” Though he says interactive TV could be an exciting application area, he doesn’t believe that it will be a substitute for optical disc technology.

Flipping a popular notion. In fact, Klappert echoed Cornyn’s remarks at the Boston Expo. This newly articulated Warner New Media strategy flips by 180 degrees the popular notion that CD-ROM products will prepare the public for interactive TV.

“Until really the mass market consumer has a chance to see why they want interactive multimedia, they aren’t going to buy it. [Interactive TV] could be to CD-ROM what radio is to the record industry. Cable will be an opportunity for people to experience interactive multimedia, but if they want to collect it and guarantee that they’ll have it for all time, I think they’ll go to the equivalent of a record store and buy it [and] put it on their shelf,” says Klappert.

But he is not talking about today’s CD-ROM technology, which has certainly proven to be less than thrilling in terms of performance. “I’m thinking, for example, of double-speed, quadruple-density CD-ROMs,” he says. “Then you’re starting to get bandwidth that will give you pretty good quality video and since it can hold two hours of video it can be a movie carrier. This would be quite good for multimedia too.”


Of course, for this to happen interactive titles will have to be far more athletic than they are today. Not many CD-ROMs inspire the kind of constant reuse — or at least potential for reuse — that defines the commerce of music or literature, and that is the challenge. In fact, this was a point that Cornyn himself made rather strenuously at the recent CD-ROM Expo in Boston.

Cornyn demonstrated the company’s latest interactive TV project with Time, and said that with two-way fiber optic cable (à la the Queens project), what he called “Interactive, Fast-Forward TV” would allow viewers to skip through TV shows as quickly as they do a magazine. Home copies would be as easy as pushing the “buy” button on the remote.

He claimed there is a legitimate reason for both markets. “We may simply have developed them in the wrong sequence,” he said during his talk. “I know we in CD-ROM are trying to make movies, but it feels like we’re trying to make them out of flip books, when videotape and celluloid are available. We know people will buy our multimedia discs, once people are aware of our virtues. CD-ROM needs that kind of exposure.”


By all accounts Time Warner is paying close attention to virtually everything that’s going on in multimedia, but continues to play its cards close to the chest. No one is quite sure how the rumored deal with IBM for a digital production system is going, nor do we know exactly what the company has planned for its Quantum 150-channel cable system in New York.

But one thing is certain: Terry Hershey will be at the center of whatever action comes down the pike. If indeed she is serious about Cornyn’s departure as “absolutely” not the end of the line for Warner New Media, then it’s probably safe to assume that the company’s multimedia focus is about to get a lot wider.

Denise Caruso

40 companies form WINForum to help push the petition

Apple Computer celebrated the culmination of a two-year effort recently when the Federal Communications Commission officially proposed that a portion of very scarce radio spectrum in the U.S. — today occupied by public utilities and railroad communications — be allocated to emerging technology services. As part of this new Emerging Technology band, it was proposed that 20 MHz be allocated specifically for “user provided,” unlicensed personal communications services, or PCS, technologies.

Although the process was put in motion by Apple’s Advanced Technology Group (ATG) in August 1990, it quickly became obvious that there was much to be gained by joining forces with other companies. What started as informal consultations among a small number of companies interested in gaining access to radio spectrum became a formal, 40-member organization called the Wireless Information Networks Forum, or WINForum.

A high stakes game. The stakes in wireless networking are huge. Almost every computer and communications entity is working on portable devices. According to Apple’s Dick Allen, manager of communications technology for ATG, portable computers will make up half of all of the computers sold by 1995.

In addition, cellular telephone sales have increased an amazing 1,600 percent during a six-year period, to eight million users in the U.S. alone. Obviously, Apple’s Newton technology and product line (and its success or failure) will be extremely dependent upon its communications capabilities.

“The promise [of portable computers, personal communications and personal digital assistants] cannot be realized if you have to connect to wires everywhere you go,” said Benn Kobb, president of WINForum, based in Washington, DC.

There are a number of different personal communications services being analyzed by the FCC, including both licensed and nonlicensed wireless systems. Each is being considered separately.

Licensed or carrier-provided services will be operated by a communications entity like a cellular phone system, and will work over large geographic areas. As on any telephone network, the user is charged for using the system. These services are still a form of PCS and still in the Emerging Technology band; the difference is that the party being paid for access to the network will hold a rare license from the FCC to charge for these services.


Licensed or no, all PCS vendors are embroiled in the politics of radio spectrum allocation. To make room in the Emerging Technology spectrum, the FCC is moving out railroad and utility companies, which aren’t very happy about it.

Though carrier PCS may be able to share the frequencies with utility and railroad microwave systems, at least for a while, Kobb says that unlicensed PCS requires clear spectrum. “We’re talking about millions of user PCS devices of many kinds,” says Kobb. “We can’t risk interfering with railroad and utility communications.”

The FCC is watching to see how the transition is handled before it actually finalizes the Emerging Technology allocation, so WINForum participants are moving gingerly to help the incumbents move out of the Emerging Technology band. “The bottom line,” says Kobb, “is that the existing users should not suffer hardships and their needs should continue to be served, even though there are new users in the spectrum.”

Wireless LAN extension. User-supported, or nonlicensed, personal communications networks are envisioned as an extension to a university or office wired computer network: there is no fee for logging on to the system and, with the proper identification, the user can communicate with other users or access and download data over that network. The user is not tied down to a specific node on the network, but can log on from anywhere in the campus or office building — a classroom, a warehouse, or even a lawn. An office meeting might take place with coworkers at a table exchanging documents between portable computers without a single wire attaching them. An intelligent network could even “recognize” the user as soon as his or her device is within detection range.

The range for wireless LAN networks is quite small compared to cellular telephone networks, or even licensed PCS. User-supported wireless networks make trade-offs that optimize such a system for campus or single building-size applications. For one thing, each cell or reception area is only about 50 meters in diameter. However, it can support extremely fast data transfer rates: up to Ethernet speeds of 10 megabits per second. By comparison, most advanced phone services offer transfer rates of only 32 kilobits per second.
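The practical gap between 10 megabits per second and 32 kilobits per second is easy to quantify. A rough back-of-the-envelope sketch, using only the raw rates quoted above (real throughput would be lower once protocol overhead is counted):

```python
# Rough transfer-time comparison for a one-megabyte file,
# using the raw rates quoted in the text (no protocol overhead).
def transfer_seconds(size_bits, rate_bps):
    """Seconds to move size_bits at rate_bps."""
    return size_bits / rate_bps

FILE_BITS = 1 * 1024 * 1024 * 8   # a 1 MB file, in bits
WIRELESS_LAN = 10_000_000         # 10 Mbps, Ethernet-class wireless LAN
PHONE_SERVICE = 32_000            # 32 kbps advanced phone service

lan = transfer_seconds(FILE_BITS, WIRELESS_LAN)
phone = transfer_seconds(FILE_BITS, PHONE_SERVICE)
print(f"wireless LAN: {lan:.2f} s, phone service: {phone:.0f} s")
```

At these nominal rates the same file that crosses the wireless LAN in under a second ties up a 32-kbps line for more than four minutes.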

In a nonlicensed network, the user or network operator either provides or recommends the hardware for the system, and access would be limited by password to a single company or organization. Thus, unlike a public network, only those devices that match the specifications of a particular network would be able to connect. The wireless LAN can, however, connect to a public network, like the phone system, to allow dial-up access to other services.

Apple’s Data-PCS. Apple developed its Data-PCS proposal for submission to the FCC in 1991. Data-PCS primarily will handle “bursty” data, or chunks of information that do not clog up network lines or airwaves for long periods of time. However, as in a traditional wired network, Data-PCS will ultimately be able to deliver streaming data, like music or video, for media-intensive applications.

“We simply asked the FCC to give us [WINForum] the spectrum with rules that were flexible enough that the technology wouldn’t be inhibited,” says Allen. Those rules included limits on transmission power and transmission time (i.e., the aforementioned “bursty” data), as well as one that requires a device to “listen” to the network before transmitting, to make sure a frequency is unoccupied. The primary concern, according to Allen, was that early spectrum users not hog the network in any way.
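The “listen before transmitting” rule Allen describes is essentially carrier sensing. A minimal sketch of the etiquette in Python; the channel model, chunk size and retry limit here are hypothetical stand-ins, not figures from the proposed FCC rules:

```python
class Channel:
    """Toy shared-spectrum channel; busy() stands in for actually
    sensing RF energy on the frequency."""
    def __init__(self):
        self.in_use = False

    def busy(self):
        return self.in_use

def transmit(chunk):
    pass  # placeholder for the radio hardware

def back_off(attempt):
    pass  # placeholder: wait a random, growing interval before retrying

def send_burst(channel, data, max_burst_bytes=512, max_tries=10):
    """Transmit 'bursty' data: listen first, defer while the frequency
    is occupied, and never hold the channel longer than one short burst."""
    for start in range(0, len(data), max_burst_bytes):
        chunk = data[start:start + max_burst_bytes]
        for attempt in range(max_tries):
            if not channel.busy():      # listen before transmitting
                channel.in_use = True
                transmit(chunk)         # bounded transmission time
                channel.in_use = False
                break
            back_off(attempt)           # defer; someone else is talking
        else:
            raise RuntimeError("channel never became free")
```

The point of the etiquette is visible in the structure: no device transmits into an occupied frequency, and no device holds the channel for more than one bounded burst at a time.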

In these situations, the usual procedure is that if the FCC grants the request for spectrum, then it asks the industry to propose more specific guidelines and rules governing the use of the spectrum. WINForum’s technical committee is now in the process of doing just that.

Apple’s internal efforts to develop communications technologies for personal networks are distinct from its work with WINForum, which was designed primarily to focus the FCC lobbying efforts of the participating companies. The WINForum charter, according to Kobb, is to “promote the right business, legislative and regulatory environment for nonlicensed, user-provided wireless voice and data communications.”

Sticky issues. There is still plenty of work to be done. Just because the FCC allocated the spectrum to user-supported PCS doesn’t mean that these networks will be in place tomorrow. In fact, the FCC will not apply the allocation until many sticky issues are resolved, such as what will happen to the aforementioned organizations that now use that spectrum. They are loath to give up what they have. And while the new PCS allocation will force them to relocate, their demands for new bandwidth must also be accommodated.

The actual allocation of spectrum probably won’t happen until sometime in 1993 at the earliest. Because these are private networks, however, services can be put into place relatively quickly: the FCC and the users of the spectrum do not have to worry about local common-carrier regulations and monopolies.

In addition to finding a new home for the spectrum’s occupants, the FCC will be determining how the 20 MHz is divided. Differences of opinion remain about how to resolve the issue of “channelization,” or how to split the spectrum. Some are advocating four 5-MHz channels, while others suggest that since voice communications don’t need that much bandwidth, the 20-MHz spectrum should be divided up among high-, medium- and low-speed services.

A who’s who. There are 40 WINForum participants, and while not all actively worked on the FCC petition process, they added weight and direction to WINForum’s actions. Formal members include Apple and Hewlett-Packard (both cofounders), AT&T/NCR, Bell Communications Research, Cabletron Systems, Digital Equipment Corp., Ericsson Business Communications, Farallon Computing, Tandy/Grid, IBM, Intel, Microsoft, Motorola, National Semiconductor, Rockwell International, Rolm, Sun Microsystems and Tandem. “It’s remarkable what this industry can do when it gets together,” says David Nagel, head of Apple ATG.

The WINForum participants were disappointed, however, in the amount of spectrum that’s been proposed by the FCC. So many different kinds of devices, applications and organizations want to make use of user-supported PCS services that Kobb doubts that 20 MHz will be sufficient.

All WINForum participants claim to be developing either personal devices or communication technology for personal digital assistants, and many are already working together on products and services. In addition, the Europeans have allocated far more spectrum to similar services than the FCC has. But the wording of the FCC decision, which left open the possibility that more bandwidth would be allocated at a later date, has so far alleviated some of those fears.

David Baron, Denise Caruso

Apple betters its QuickTime, too

After many delays, Microsoft finally announced its technology for dealing with time-dependent media and digital video for the Windows system. Video for Windows (formerly known as AVI, or audio video interface) provides a scalable video capture and playback architecture for PCs running Microsoft’s Windows 3.1 operating system, which includes the multimedia system tools.

Not waiting for Microsoft to steal its thunder, Apple announced a QuickTime upgrade two weeks before Microsoft could take the stage.

In its basic configuration, Video for Windows will allow users to play back digital video sequences in small (320×240 pixels) windows at up to 15 frames per second. The software requires at least a ‘386 processor running at 16 MHz, with a color VGA monitor and a Multimedia PC audio board for those who want their video with sound. These are the same basic requirements for the MPC standard, so people equipped with at least the lowest-end MPC will be able to play video sequences on their machines.

For capturing video, Microsoft is recommending at least a 33-MHz ‘386, 4 MB of memory and a third-party video capture or digitizing board such as the Video Blaster, Targa or Intel i750-based boards.

The software package, which will sell for $199, includes a number of complementary applications and system tools, as well as a clip library of 250 files for unlimited, license-free use. One is Media Player 2.0, an upgrade of software originally included in the Multimedia Windows system.

The new Media Player is now an “OLE server,” OLE being Microsoft’s object linking and embedding architecture that allows applications to import software objects of any type without further modification to the program. As a result, video can be embedded in any OLE application. A Media Control Interface video driver is also included, which is an implementation of the DV-MCI command set that Intel, Microsoft and other companies hammered out last spring.

There are also two applications for capture and editing: VidCap and VidEdit. Within VidCap, the user digitizes a video segment, choosing the parameters under which he or she wishes to encode the data, such as the frame size, bit depth and audio sampling rate. The clip is stored uncompressed in a predefined “capture file,” which resides on the hard disk.

Most of the processing takes place under VidEdit. It is here that the final start and end points of the clip are selected. In addition, other media elements can be added to the sequence, segments can be added or removed, and audio can be resynchronized to the video (in case it has lost synch during the editing process). Once the user is happy with all of the elements of the sequence, it can be compressed and saved in the Microsoft AVI format.


Like QuickTime, Video for Windows can incorporate any third party’s video compression algorithms. It comes with three software compression encoder/decoder algorithms, or codecs, built-in. Microsoft RLE (run-length encoding) was designed specifically for animations or other synthetic images. Microsoft Video 1 works better for video images, and is better able to handle deep color or fast motion. Both can be set for target playback platforms; i.e., you can set the maximum data transfer rate for playback at 150K per second if the clip will eventually play off of a CD-ROM.
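Run-length encoding, the idea behind Microsoft RLE, replaces runs of identical pixels with a count and a single value, which is why it suits flat-shaded animation frames far better than noisy natural video. A generic sketch of the principle (not Microsoft’s actual bitstream format):

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (count, value) pairs."""
    if not pixels:
        return []
    runs = []
    count, current = 1, pixels[0]
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((count, current))
            count, current = 1, p
    runs.append((count, current))
    return runs

def rle_decode(runs):
    """Expand (count, value) pairs back into the original sequence."""
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

# A flat-shaded scanline compresses well; random noise would not.
scanline = [7] * 12 + [3] * 4
print(rle_encode(scanline))   # [(12, 7), (4, 3)]
```

Sixteen pixels collapse to two pairs here; on a natural-video scanline, where neighboring pixels rarely repeat exactly, the same scheme can actually expand the data, which is why Video 1 exists alongside RLE.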

RTV, now Indeo. The third algorithm comes from Intel. Called Indeo, this is a high-end codec designed to work in conjunction with the IBM/Intel ActionMedia II boards. Indeo was known as Real-Time Video in its past Digital Video Interactive (DVI) incarnation.

Using the i750 chip on IBM’s ActionMedia board, Indeo provides the highest quality playback available. Thanks to the transparent scaling capabilities of Video for Windows, Indeo clips can play back on any capable system — but without the i750, only in a 160×120 window and, depending on what hardware is present in the system, at up to 15 frames per second. With hardware assistance, these clips can be played back at full screen and 30 frames per second. (Intel accomplishes full-frame video by interpolating the pixels from half-frame video.)
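Interpolating full-frame video from half-frame data can be illustrated with simple linear interpolation along a scanline: each missing pixel is synthesized as the average of its two stored neighbors. This is a generic sketch of the idea, not Intel’s proprietary i750 algorithm:

```python
def upsample_line(half_line):
    """Double a scanline's width by averaging each pair of neighboring
    stored pixels to synthesize the in-between pixel."""
    full = []
    for i, p in enumerate(half_line):
        full.append(p)                                 # stored pixel
        if i + 1 < len(half_line):
            full.append((p + half_line[i + 1]) // 2)   # interpolated pixel
        else:
            full.append(p)                             # edge: repeat last pixel
    return full

print(upsample_line([10, 20, 30]))  # [10, 15, 20, 25, 30, 30]
```

The synthesized pixels carry no new information, which is why interpolated full-frame output looks softer than video captured at full resolution in the first place.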

With a ‘486 PC and the ActionMedia board, it is also possible to encode and compress in one step, as opposed to the two-step process mentioned above.

Plenty of rebirthing. The partnership with Intel serves both companies well. Microsoft is able to offer a tightly coupled hardware-assist option for less than $500, and Intel may finally have found a niche for its DVI technology that has undergone more rebirths than birthdays during the past four years.

Digital video sequences produced under Video for Windows can be included in any Windows application supporting OLE. For example, a Media Player file could be inserted as a graphic object within any word processing document.


Video for Windows and QuickTime 1.0 are functionally equivalent. Both are able to process digital video in small windows at reduced frame rates; both are scalable solutions that can determine the best way to play back a clip in relation to the platform it is playing on. And both are modular, allowing third parties to supply both compression codecs and hardware assistance for a variety of different application solutions.

Unfortunately for Microsoft, Apple announced QuickTime 1.5 about two weeks before the Video for Windows announcement. The new QuickTime raises the stakes in the area of software-only video playback. Additional compression algorithms included in QuickTime 1.5 enable video playback at half-screen (320×240 pixels) at 15 frames per second, or quarter-screen at 30 fps. In addition, Apple has included the Photo CD access software and decoders and the new version of its CD-ROM software, which will be shipped with its new double speed CD-ROM drives (see separate story, p. 24).

Apple’s new software expands beyond audio and video by enabling developers to create or incorporate additional data types with a generic media handler. This could take the form of an additional track that accompanies a video, for example. SuperMac, New Video and RasterOps are all offering hardware-assisted solutions that provide full-motion digital video, though at much higher prices than the Microsoft-Intel solution (see story, p. 13).

Most importantly, Apple and its developers have a 16-month lead in the digital video market. QuickTime is still ahead of Microsoft’s offering, and the tools that have been developed around QuickTime are more sophisticated than anything that is yet available for Windows, but that will probably not be the case for long.

Rumors of cooperation and cross-compatibility have been slightly exaggerated, but all is not lost. Apple developed a Windows player for QuickTime videos, and Microsoft has included a QuickTime movie converter with Video for Windows. This is still a platform war, in which the subjective opinion of the user will make the final determination. One technology or platform will not force the other out of the market — at least not at this point.

David Baron

The benefits, hopefully, include untapped markets

Sniffing a new revenue stream in the making, several of the nation’s largest stock photography and film libraries are releasing their collections, or parts of them, in digital formats for use in multimedia.

WPA Film Library, Image Bank and Westlight are three such companies exploring the world of digital images and multimedia.

More than 10,000 hours of film. WPA Film Library, based in Alsip, IL, has perhaps shown the greatest commitment to new technologies by releasing its entire collection for use in multimedia. As a division of MPI Home Video — the world’s oldest home video company — it owns or represents the copyright to more than 10,000 hours of film from 1895 to the present.

The diverse collection covers cultural documentaries, television shows, travel, science and educational films. It has exclusive rights to historically significant footage, such as the British Pathé News Collection (1896–1970) and Martin Luther King Jr.’s “I Have a Dream” speech from 1963.

Licensing will be negotiated on a case-by-case basis. In most cases, the company says rights to a clip or film will be granted in perpetuity for each individual project.

New line of business. Multimedia is a new line of business for WPA, one it approaches with some trepidation. “We need to educate ourselves to how big the market is,” says Lou Zucaro, director of multimedia operations. “I’m not sure we have expectations as far as the market goes.”

The company’s decision to test the waters came from a practical decision to digitize its film stocks for the sake of preservation and to make perfect reproductions for its clients. “This way we didn’t have to go back to the film for copies, and there’d be no loss of quality,” says Zucaro.

Zucaro believes most of WPA’s multimedia business will come from its existing client base of advertising agencies, independent film makers, corporations, museums, universities and others. Computer companies including Commodore, Digital F/X and C-Cube Microsystems were among WPA’s first multimedia clients.

A Multimedia Construction Kit, containing the WPA Film Library catalog in both book and CD-ROM formats along with three other CD-ROMs of stills, clips and sound effects, will be sent to interested parties and the company’s client base.


In a similar development, Dallas-based Image Bank, the world’s largest stock photography agency with more than 20 million photographs, and Los Angeles-based Westlight, the world’s fifth largest stock photography agency, are testing the digital media waters as well by placing samplings of their collections onto Kodak’s Photo CD Catalog discs.

Photo CD Catalog enables users to search electronically through vast databases of low-resolution images by key words. Also, Photo CD images can be displayed on a television or computer screen for group viewing. (For a detailed look at Photo CD and Kodak’s strategy, see Vol. 2, No. 4, p. 5.)

Image Bank has released an initial collection of 2,500 photographs and illustrations on its first Photo CD Catalog discs. The catalog is intended for graphic designers, art directors, desktop publishers and multimedia producers.

The Photo CD Catalog represents a new advertising channel for the company. “Inasmuch as it facilitates the search through our catalogs, I expect it will expand our business,” says Knud Smal, VP of international marketing at Image Bank.

Westlight has released 3,000 images onto Photo CD Catalog for similar use.

Unlike Image Bank, Westlight is diving head first into Photo CD. Westlight has leased a commercial Photo CD workstation from Kodak and is investing significant amounts of time and resources toward the development of digital photography.

Traditional business will dry up. Craig Aurness, Westlight’s founder and president, believes that demand for traditional stock will be flat while electronic images take off. Digital photography has several practical advantages over print photography, particularly in electronic search, distribution and manipulation capabilities. “It’s not hard to watch my twelve-year-old at home with an IBM to realize where the future of information distribution is going,” he says.

For two years, Westlight has been gathering high-resolution photography specially suited for screen display. Screens present information, such as scan lines, color and contrast ranges, differently from print formats and require special photographic techniques for high-quality output.

Multimedia producers are first. Westlight’s first few clients are multimedia producers. So far, marketing of the digital images has consisted of mailers to its client base of art directors, designers, publishers and multimedia producers, offering the Photo CD at minimal cost.

A division of 10 full-time employees is working with the Photo CD technology. Not all of Westlight’s two million images will go onto the format, because demand simply isn’t there, according to Aurness. He predicts several hundred thousand electronic images will satisfy the market for now.

Westlight is also working with Kodak to develop Kodak’s Picture Exchange. Picture Exchange, to go online in 1993, will be a national data network for online searches through a database of millions of photographs.

Stock film and photography centers have entered the digital age. Some companies, such as Westlight, are eagerly greeting new technologies. However, the less enthusiastic attitudes of companies such as Image Bank and WPA are more likely to be mainstream until they can be convinced of the benefits. As Smal says: “This is an initial test period. Nothing is set in cement. If it adds anything to our business, we will expand.”

Amy Johns

New company focuses on entertainment, education, fine art

Magic Box Productions, founded less than a year ago by a former NHK Broadcasting wunderkind, plans to create the context for some of today’s hottest buzzwords: digital high-definition television, virtual reality and interactive media.

Hirofumi Ito, now 38, produced and directed hundreds of programs for NHK, Japan’s public television network. More recently, he founded HD/CG New York, the first high-definition computer graphics production facility and an affiliate of NHK. (HD/CG’s first production, “Lost Animals,” recreated the form and movement of extinct animals and has won 12 international awards.)

Ito founded Magic Box in December 1991. Located in Beverly Hills, CA, the company defines itself as “a team of producers, artists and engineers … creating the next generation of digital media.” It is both a think tank and a production facility, with members of the company either developing new technologies on their own or fostering the development of them through alliances with other companies. It is funded by a single anonymous Japanese investor, who believes Magic Box can change the face of entertainment.


The company’s proximity to Hollywood is intentional, since the U.S. movie-making industry is one of its primary foci. And by all accounts the interest is reciprocal: Magic Box had visitors from Hollywood’s film community before its doors were even officially opened.

Much of the film work at Magic Box is still under wraps or in negotiation. But Sally Rosenthal, head of the interactive technologies division at Magic Box, tips her hand a bit. “I am bored with the way technology is used in films today,” she says. “Like Hirofumi, I prefer B-movies to mainstream movies. We all like bad films a lot. In fact, it’s fair to say that we are all enchanted with the idea of making bad films as well as special effects for mainstream films.”

Live action plus 3D. The first film that will list Magic Box in its credits, however, is as far from a bad drive-in movie as you can get. Ito was recently named director of a feature-length film about a Japanese official, known only by the name Sugihara, who during World War II saved the lives of thousands of Lithuanian Jews by issuing them visas against the orders of the Japanese government, enabling them to emigrate from Lithuania through Japan to their final destinations.

The film’s live action footage, which will be produced in the United States, will be shot in high definition with 100 percent computer-generated 3D backgrounds. It will be the first film of its kind in the world. Magic Box recently purchased a Sony digital high-definition recording facility, one of the few such systems in the world.

To celebrate and demystify. “The reason to be interested in Magic Box,” says Rosenthal, “is because the people there have a new attitude toward computers and technology. The attitude is to simultaneously demystify technology and celebrate the magic of it.”

(In at least partially keeping with that philosophy, the Magic Box logo was designed by Susan Kare of the original Macintosh team, who’s designed charming and friendly iconography for nearly every graphical user interface in the computing world, including Microsoft Windows and PenPoint from Go.)

In addition to Rosenthal and Ito, there are two other core members of the Magic Box team. Momoko Ito is an international negotiator who specializes in technical and media entities. Jean Kim, a computer graphics and HDTV expert, once worked at Captain America, the company that helped produce one of NHK’s first live satellite high-definition productions. Now she is working on a five-minute demo for NHK based on the Chinese fairy tale Magic Monkey.

The company is small and intends to stay that way. “Our company is different from the existing computer graphics production companies,” Hirofumi Ito said in a recent interview with the Japanese edition of Pixel magazine. “We are not the type of company that has rows and rows of workstations. We are a company that generates ideas and plans.”

Hirofumi Ito says Magic Box will hire computer graphic production companies to help with its projects, which extend well beyond the film industry. The company is also involved in developing advanced computer graphics technology, such as its V-Clay 3D modeling software.


V-Clay is based on “metaball” technology (by the way, spell-checking software insists on changing it to “meatball”) originally developed by the legendary Jim Blinn, now of CalTech, formerly from the Jet Propulsion Lab, who’s well known for his humanistic approach to complex computer graphics problems. Metaballs provide a more organic look to 3D models than traditional polygon-based software. V-Clay has recently been licensed to SoftImage, which will distribute the product for use on Unix-based workstations and the Silicon Graphics Indigo computer.
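A metaball surface is an implicit surface: each ball contributes a field that falls off with distance, the fields sum, and the surface lies wherever the total crosses a threshold. That summing is what lets nearby blobs melt together organically rather than meeting at hard polygon edges. A minimal field evaluation in Python; the inverse-square falloff and threshold here are illustrative choices, not V-Clay’s actual formulation:

```python
def field(point, balls):
    """Sum each ball's contribution at a 3D point, using an
    inverse-square falloff. balls is a list of (center, strength) pairs."""
    px, py, pz = point
    total = 0.0
    for (cx, cy, cz), strength in balls:
        d2 = (px - cx) ** 2 + (py - cy) ** 2 + (pz - cz) ** 2
        total += strength / (d2 + 1e-9)   # tiny epsilon avoids division by zero
    return total

def inside(point, balls, threshold=1.0):
    """A point is 'inside' the blobby surface when the summed field
    exceeds the threshold; a renderer traces the field == threshold
    boundary to get the visible skin."""
    return field(point, balls) >= threshold

# Two unit-strength balls: the midpoint between them is inside because
# both fields add up there, so the two blobs merge into one smooth form.
balls = [((0, 0, 0), 1.0), ((2, 0, 0), 1.0)]
print(inside((1, 0, 0), balls))
```

Move the two centers apart and the field at the midpoint drops below the threshold, so the single blob smoothly pinches into two, with no polygon seams at any point in between.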

Magic Box is also involved in producing large audience-participation events including location-based theme parks as well as sporting and game events — similar to the 5,000-participant interactive Pong game showcased at the Siggraph 1991 show that Rosenthal produced (see Vol. 2, No. 2, p. 30).

More fun to do. To that end, Magic Box recently acquired worldwide marketing rights to the Cinematrix Audience Participation system, the same system used for the Pong game.

Developed and patented by Loren Carpenter of Pixar, the technology is a mechanism by which people use reflected light to give instructions en masse to a computer. Though much more fun to do than to describe, Cinematrix is a breakthrough technology for creating the kind of audience-participation experiences that Magic Box has planned.

Right now most of what’s going on at Magic Box involves traditional tools, such as ink on paper: much contract negotiation is under way. This is a company you’re likely to see much more of, both on the big screen and behind the scenes.

Janice Maloney


Last month’s piece on electronic publishing (Turning Up the Heat: A Media Wildfire) failed to note that Sony announced more than 70 titles at the same time it launched its MMCD player in New York. Titles shown at the event included the Wall Street Journal Guide to North America published by Random House and the Official Airline Guides Travel Disc published by Sony Electronic Publishing. Other publishers working with the MMCD player include Compton’s New Media — by far the most prolific, with titles ranging from Let’s Go Europe to New Dynamics of Winning — and IBM Corp.



Worldesign, Inc., the nation’s first information design studio using virtual reality (VR) technology for industrial and commercial applications, has announced its first consulting agreement, with the Simulation Division of Evans & Sutherland, Inc.; a handful of other design and consulting contracts are under discussion.

Worldesign, a spin-off of the Human Interface Laboratory (HIT Lab) at the University of Washington, was founded some seven months ago to pioneer a business in the emerging, yet still sparsely populated field of commercial virtual reality. (For more on HIT Lab, see Vol. 1, No. 6, p. 19.)

Evans & Sutherland, veteran manufacturer of high-end computer graphics engines for simulators, has hired Worldesign to study the application of VR to its industrial and commercial business. The company’s other potential contracts include Osaka Gas in Japan and Electric Power Research Institute in Palo Alto, CA, as well as an unnamed European auto manufacturer and a large financial reporting service.

The Seattle, WA-based company’s information designs include projects such as developing an interface between operators and complex machinery and showing how the next generation of utilities might function in Japan.

Robert Jacobson, founder and president of Worldesign, as well as a cofounder of HIT Lab, knows the company is entering uncharted waters with its concept of information design. “There’s a lot of education involved,” he says. “People have taken for granted the media environments in which they live. How can you rework an environment?”

Worldesign has a nonhierarchical work structure based on a medieval crafts guild model, and is trying to work out a licensing approach that allows all participants to maintain some financial connection to their projects. Worldesign favors a collaborative approach to design and technology and says it will pool the best available resources for projects. Several of the 10 staff members have backgrounds in cultural or anthropological studies, and four are HIT Lab graduates.


Business leaders from many of the San Francisco Bay Area’s multimedia companies and leaders in City Hall are pushing a resolution to establish the city as an international multimedia mecca.

The effort, which started more than a year ago, will culminate in a vote by the San Francisco Board of Supervisors on Dec. 8. It is motivated by a desire to foster growth of the new multimedia industry in San Francisco.

The resolution, which takes the form of a statement of intent by the city, calls for zoning laws to allow for high-bandwidth telecommunication lines and other infrastructure improvements. Also, the city would provide support for an interactive multimedia center equipped with a library of interactive titles, hardware and software, and conference areas. In addition, the industry would receive funds from the San Francisco Hotel Tax Fund to sponsor an annual Interactive Media Festival.

“San Francisco can become the Hollywood of this industry,” says San Francisco city supervisor Jim Gonzalez, also a member of the steering committee. He predicts multimedia will be a $200 million business in San Francisco by 1996 (though by whose estimates he doesn’t say).

San Francisco has significant media- and technology-related resources at its disposal. The city is home to more than 38 multimedia companies. Nearly half of San Francisco Bay Area workers are already trained or work in technology- or information-based positions, according to the development group. The city additionally benefits from a growing technology base in Silicon Valley and other regions of the Bay Area.

An ad hoc development group, comprising 18 individuals from Bay Area multimedia companies and related fields, is cultivating the effort. Thus far, “there are as many people in the room as there are agendas,” says one committee member.

The resolution will become more defined over time. However, the group aims to remain open, inclusive and representative of diverse interests and perspectives of San Francisco’s growing multimedia community.


The slow access times for CDs have been an industry annoyance since people started putting anything other than audio on a CD-ROM. The compact disc was designed to handle linear audio tracks, and anything that required seeking a specific spot on the disc took what seemed like forever. Short of shrinking the indentations on the disc, which would require a laser with a finer beam, the only way to increase the access speed of these drives was to increase the speed at which the disc spins.

And now, just as CD-ROM XA discs have started to become adopted as an industry standard, companies are starting to release double-speed CD-ROM drives.

NEC Technologies started shipping its double speed CD-ROM drive for the Macintosh and PC in January. In March, Sony announced it would release a PC competitor in 1992. And, as of last month, Apple has entered the fray with an enhanced Macintosh version of the Sony double-speed drive due out in December.

The double-speed drives, as you might expect, are able to pull data off the CD at 300K per second, compared to the 150K per second of most standard CD-ROM drives. Average access time, or the amount of time it takes for information to be located on a disc, is 280 milliseconds for the NEC drive and 295ms for the Apple drive, compared to today’s norm of 600ms. Depending on the computer such drives are attached to, users can expect to see substantial differences in speed, especially with titles that primarily store textual information.
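The real-world effect of those numbers is easy to estimate: total retrieval time is roughly the access time plus the data size divided by the transfer rate. A quick comparison in Python using the figures quoted above (real discs add seek variation and buffering effects, so treat this as a rough model):

```python
def retrieval_ms(size_kb, rate_kb_per_s, access_ms):
    """Rough time to locate and then read one block from a CD-ROM:
    average access time plus transfer time."""
    return access_ms + (size_kb / rate_kb_per_s) * 1000.0

# Fetching a 300K block (two seconds' worth of standard-rate data):
standard = retrieval_ms(300, 150, 600)   # typical single-speed drive
apple2x = retrieval_ms(300, 300, 295)    # Apple's double-speed drive
print(f"single-speed: {standard:.0f} ms, double-speed: {apple2x:.0f} ms")
# -> single-speed: 2600 ms, double-speed: 1295 ms
```

For small text lookups the access time dominates, which is why text-heavy titles feel the improvement most; for long sequential reads the doubled transfer rate is what matters.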

XA compatibility? The XA standard requires additional hardware or software to enable the reading of compressed audio. Apple’s double-speed drive is already XA-compatible, while NEC claims it can easily upgrade its new drives to XA when it decides to do so. The Apple drive reads multisession Photo CD discs, as well as CD+G and CD+MIDI discs (although this capability is of rather questionable broad market value).

Both drives can switch between normal and 2× speeds. Apple’s drive also has a 256K buffer, versus NEC’s 64K buffer, which further enhances response time. Of course, spinning the disc at twice the regular speed means anything that is time-dependent, such as video or audio, is not likely to run properly. Chandran Cheriyan, CD-ROM product manager at Apple, said that “maybe, in some cases” there would be compatibility problems.

Apple deals with this problem in software by allowing a manual switch to standard CD-ROM speed. NEC’s drive will only slow for CD audio tracks.

The Apple drive, bundled with up to 10 CD-ROM titles, has a suggested retail price of $599 for the external model and $499 for the internal model, expected to ship in January 1993. The drives will also ship with the Performa 600 and the Macintosh IIvi and IIvx computers. NEC's bundle of six CD-ROM titles, called Multimedia Gallery, retails for $999. A standalone drive is $749 and an internal drive $649.


Microboards, Inc., the Japanese supplier of the first CD-I authoring system for Philips, has introduced a 144-disc-capacity CD-ROM minichanger, priced aggressively at $14,950.

The system, called the Libreeze 604X minichanger, accesses 36 discs online and 108 discs “near-line.” The online discs are stored in six Pioneer DRM 604X disc drives for immediate access.

"Near-line" discs are not ready to play, but are stacked inside the Libreeze case, readily accessible for manual loading. When a near-line disc is requested, an LED panel on the case lights up to indicate the disc's location.

The Libreeze minichanger has a data transfer rate of 612K/second, thanks to Pioneer’s Quadraspin technology, which spins the disc at four times normal speed. It also boasts an average access time of 300 milliseconds, a figure comparable to the market’s fastest drives.
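A quick sanity check of the quoted figures (a back-of-the-envelope sketch, not from Microboards or Pioneer):

```python
# Check the quoted Quadraspin transfer rate against the
# single-speed CD-ROM baseline of 150K per second.
BASE_RATE_KB = 150          # standard single-speed CD-ROM transfer rate
QUAD_RATE_KB = 612          # quoted Libreeze 604X transfer rate

multiple = QUAD_RATE_KB / BASE_RATE_KB
print(f"{multiple:.2f}x the single-speed rate")
```

The quoted rate works out to slightly more than four times the single-speed baseline, consistent with the "four times normal speed" claim.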

Thanks to this high transfer rate and fast access time, Microboards claims the system can function as a file server in networked situations. The first few systems came off the production line in October.

"We envision these going into the network server area," says Craig Hanson, general manager of Microboards' U.S. office in Carver, MN. "There's interest among governmental agencies, Photo CD users where they have a lot of images to work with, and anywhere where large databases of text, images and sound are used."

The system is compatible with DOS, Windows, Macintosh, OS/2, VMS, SunOS, Solaris, Unix and Silicon Graphics systems, and conforms to many CD-ROM standards. Although CD-I authoring was once Microboards' bread and butter, the company chose not to make this model CD-I compatible. (For more on Microboards, see Vol. 1, No. 11, p. 14.)

Although the changer is useful for existing CD-ROM titles, Microboards did not make it CD-ROM XA capable, a decision that seems shortsighted in light of heavy interest in the XA format. The system is, however, Photo CD compatible. The company says it is working on XA compatibility with an unnamed third party and will upgrade the system if the demand is there.


Bell Atlantic Corp. is beating some of the large cable companies at their own game. The Bell operating company recently announced plans to develop a “video-on-demand” system that will enable TV viewers at home to dial up movies or television programs of their choice, at any time, over ordinary telephone lines.

While both the cable and telephone companies have been gearing up to compete in this potentially rich market, many of the contenders have been stymied by the technical limitations of existing telephone wiring, which in the past has been considered incapable of transmitting the huge amounts of digital data required for commercial video-on-demand systems.

Bell Atlantic has found a way around that. Using technology developed by Bellcore, the research arm for the seven regional Bell operating companies, Bell Atlantic will offer viewers at home a single channel of high-quality video that will run over ordinary copper telephone lines. Viewers will be offered a variety of choices from which they can select one at a time. They will not be able to watch live television on the system in its first form.
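To see why copper wiring was long considered a dead end for digital video, compare raw video bandwidth with what compression can achieve. All of the figures below are illustrative assumptions (a 640-by-480, 24-bit, 30-frame-per-second picture, and a compressed rate in the neighborhood of MPEG-style coding); none come from Bell Atlantic or Bellcore.

```python
# Why digital video over ordinary phone lines depends on compression:
# compare raw video bandwidth with a plausible compressed rate.
# All figures here are illustrative assumptions, not Bell Atlantic specs.

width, height = 640, 480        # assumed picture dimensions
bits_per_pixel = 24             # assumed full-color depth
frames_per_sec = 30             # assumed frame rate

raw_mbps = width * height * bits_per_pixel * frames_per_sec / 1e6
compressed_mbps = 1.5           # roughly MPEG-1 territory (assumption)

print(f"raw video:          {raw_mbps:.0f} Mbit/s")
print(f"compressed video:   {compressed_mbps} Mbit/s")
print(f"compression needed: about {raw_mbps / compressed_mbps:.0f}:1")
```

Squeezing a picture down by two orders of magnitude is what makes a single video channel over existing copper plausible at all, and it is why the first version of the system offers stored programs rather than live television.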

A test system involving 400 employees of Bell Atlantic, which provides telephone service in New Jersey, other mid-Atlantic states and Washington, DC, is slated to be in place sometime next summer. Commercial introduction of the Bell Atlantic system is expected in 1994.


Online and open-information advocates gave a heavy sigh last year when legislation to increase online access to U.S. government information passed the House of Representatives but failed at the last minute in the Senate. The successor to H.R. 5983, known as the "Gateway to Government" bill in the Senate and the "Wide Information Network for Data Online" (WINDO) bill in the House, promises to be a hot item in Washington in 1993.

The bill would provide a single phone number for online access to public federal information and charge incremental fees. More than 1,400 federal depository libraries would receive the information at no charge. Demographic statistics, federal court cases, Securities and Exchange Commission (SEC) disclosure statements, and White House and State Department press releases are among the federal databases that would be accessible through the system.

Public government information is available through hundreds of disparate online sources ranging from private companies and value-added online services to government-sponsored bulletin boards.

James Love, director of the Taxpayer Assets Project of the Center for the Study of Responsive Law based in Washington, DC, says the problem with the current distribution system is that it often makes users pay for information twice — once in taxes and again in inflated online fees. In other cases, online information is not available or is difficult to find. “Our argument is if taxpayers pay for the information, and they’re willing to pay for access to the information, it should be made available to them,” he says.

Two drafts of the legislation have been introduced thus far. The original draft, authored by Sen. Albert Gore of Tennessee, the Democratic vice presidential nominee at this writing, calls for all public government information to be placed online. A subsequent compromise measure covers only those materials published by the government's Superintendent of Documents office. The bill's sponsors are Democrats Sen. Albert Gore (TN), Sen. Wendell Ford (KY) and Rep. Charlie Rose (NC).

"If (Gov. Bill) Clinton is in the White House, it's likely the Gore-sponsored bill will go back to its original form," says Love. The bill's direction should be easier to determine after the presidential and congressional elections in November.


The results of the testing of high-definition television systems at the Advanced Television Test Center (ATTC) in Alexandria, VA, are being published by the ATTC, the Advanced Television Evaluation Laboratory and CableLabs. The results for each of the five proponent systems will appear in a separate publication.

The price for CableLabs' member companies is $1,500 for the entire set, or $350 per copy. Currently available are the results for NHK's Narrow-MUSE system and General Instrument's DigiCipher HDTV™. Copies may be obtained by contacting Janet Martin, ATTC, 1330 Braddock Place, Alexandria, VA 22314; phone (703) 739-3850, fax (703) 739-8442.

Multimedia is special info for special audiences

David Shefrin is principal of New York-based Shefrin & Associates and founding chairman and recent past president of the Interactive Multimedia Association. A longtime television producer, writer and executive, and a member of IBM’s corporate communications management, Shefrin played key roles in the development of the Ulysses and Columbus multimedia projects. He is now focused on multimedia and visual information for education. One of Shefrin’s many claims to fame is buying the first video cassette machine in the world from Sony’s Akio Morita.

The television and mass media frame of reference is important to understanding what is happening in multimedia development. The industry focus, until recently at least, seems to have been on which technologies and tools can be developed and sold.

In the scramble to create alliances and strategies to develop multimedia, those hoping to make the new business happen have been thinking and planning without much vision. And from the focus on technology and tools has come the overwhelming idea of multimedia connected into the home, long before anyone knows how to make a product work in that context.

This focus has to do with the industry’s obsession with the merging of computer and television as they have been used to date. This may represent a necessary and important view of how to make progress. But it is a limited view of multimedia as a product for people to use. Multimedia’s promise lies beyond computing and beyond television.


In the larger context of communications media, multimedia has a new and different role from that of mass communications. Its powerful mix of multiple media allows individual direction and control of how information is used. It multiplies the attention factor, deepens the learning process and provides a new means to explore information in depth to meet special needs and interests.

The question is, how are we meeting the opportunities of multimedia in this context? How does today’s thinking and planning by entities and individuals help to define the intersection between technology and content?

The answer is that it does not. Observers, analysts and researchers have prematurely taken up the entrepreneurial point of view that a mass market for multimedia exists, and are going after it hammer and tongs with the computer/television model firmly in mind. In fact, there is a need to move away from the broadcast-like, mass-media approach to communications. Multimedia requires a new and different "information" approach to special, focused audiences for education and communication. What multimedia can deliver is special information for special audiences, a significantly different experience from that of existing media, and this is not recognized.

We live by means of visual information but as yet have little understanding of what that means, either in the think tanks or among communications practitioners. And now comes multimedia to provide a powerful new means for individuals to use and understand visual information.

In the vision of multimedia as the next level of communications, we see publishers of books, magazines and newspapers, movie studios and broadcasters, cable and telephone companies and other utilities planning to move into the world of digital communications.

As they do, they must be careful not to follow the lead of many multimedia developers today, who in my opinion hold parochial views of newspapers, magazines and broadcast communications as "mass media." They seem to believe that all media are similar, and that the common goal of huge sales is the right goal. But the scattered experience of television is not magic for people who require more than the visually appealing and want focus and feedback instead of fleeting images. Some of us doubt that what Marshall McLuhan's "global village" wants to know is necessarily what individual people really want to know.

To miss this point is to ignore the trends of the past 20 years in magazine and newspaper publishing, as well as in the use of television. Magazines were first to suffer as general-interest, mass-circulation publications such as Collier's, Look and Life began to fold. Over the past 15 years, magazines have reconfigured their approach to readers. They've become more specialized in editorial focus, and advertising frequently uses regional inserts directed to markets based on geographic areas or niche readership.

This decline in mass appeal is happening in television, too. The networks are crying about a great decline in audiences — up to 40 percent, by some counts. Where are the people going? One place they’re going is to cable, where among other things they can find programming designed specifically for their desires: Cable News Network, ESPN for sports buffs and plenty of movie channels.


Are these limited views changing now? I don't know. The burgeoning new alliances between new and old media companies are encouraging on one hand, because resources are being mobilized. But it is not encouraging to hear some of the prospective partners thinking in terms of a broadcast television, motion picture, video game and pop music paradigm of what multimedia is or can be.

So what happens when a new medium comes along and amalgamates the individual power of the media that have gone before and puts them all in one bag? How do you use the power they generate — do you duplicate what’s been done before? It does seem rather wasteful, yet that so-called “mass market” is what multimedia developers are talking about.

Yes, everyone will have a piece of multimedia. But that’s not the main aim. The goal, really, is to find out what these tools can do better and differently from other media over the years. Their power is apparent. We need simply to put them to work.

As multimedia business opportunities are defined, intellectual property, or program content, will become the common thread of interest. After all, the standards issue will eventually be resolved either by agreement or by some de facto accomplishments. I wonder if this will be recognized as we try to go beyond computing and beyond television to create multimedia products that will be useful to people and, therefore, successful.

The arrival next year of a number of personal digital products will suggest a new emphasis on message communications, as will the growing use of multimedia business tools. As these improve the economics of manufacturing and spread the word about multimedia, everyone will benefit.

But this promise also suggests a critical decision point. Will there be sufficient energy applied to making the content-based products that can best define multimedia as a unique way of presenting and using information?

David Shefrin