
Digital Media
August 12, 1991
Volume 1, Number 3

- CONTENTS -

• APPLE AND IBM’S BIG (AD)VENTURE
When Apple Computer announced its four-pronged strategy to share key technologies with arch-rival IBM, the fourth part of the agreement, cooperation on multimedia technologies, seemed almost to be tacked on as an afterthought. But don’t be fooled. If ink is ever put to paper on this partnership, a collaboration between the two computer giants could have a far-reaching effect on the future of media-based computing.

There’s a reason why details on the multimedia aspects of the Apple-IBM agreement are sparse: When considering the technologies upon which the future depends, it is prudent to be very careful about what one gives away. Especially when Microsoft is breathing down everyone’s neck.

• DIGITAL MOVIES FOR THE MASSES
Andy Bechtolsheim details the Hollywood (CD) player, where one movie fits on one compact disc

• A DEAL THAT SMACKS OF GREED
Guess who is served by taxing blank tapes and recorders? Not the consumer

• STEPPING INTO VIRTUAL REALITY
Pioneers band together to launch location-based entertainment

• SECOND-GENERATION INTERACTIVE
Sony intros two new video products

• MACROMIND, PARACOMP TO MERGE
Trying to achieve critical mass, it took “two nanoseconds” to see potential

• A NEW PC PRODUCT FOR MACROMIND
Action and Opera for Windows

• VERBUM INTERACTIVE
First CD-ROM periodical is a hopeful sign

Next: APPLE AND IBM’S BIG (AD)VENTURE

APPLE AND IBM’S BIG (AD)VENTURE
Win, Lose Or Draw? In Any Case, It’s A Big Gamble

By this point, three of the four pieces of the planned partnership between IBM and Apple Computer have been dissected by industry speculators in minute detail: the cooperation for network standards, the second-sourcing of IBM’s RS/6000 RISC chip to Motorola for Apple’s use in a new line of computers, and the co-development of an object-oriented operating system and object-based application development environment.

But both firms have refused to discuss the fourth part of the agreement — their cooperation on multimedia technologies. This area certainly has the potential for visibly changing the face of computing in the coming decade. But it has equal potential for making Apple an also-ran in the market it pioneered. This dichotomy and lack of clear benefit to Apple is making multimedia watchers scratch their heads.

At a recent analysts’ meeting in Cupertino, Apple chairman John Sculley disputed claims that the two firms have been mum on multimedia aspects of the deal because they haven’t figured it out yet. He says that all aspects of the deal, should it get past the “letter of intent” stage, have been hashed out in detail.

Next: THE MORE THINGS CHANGE . . .
But insiders say that those details have changed fundamentally many times in the months since the letter of intent was signed, a revelation that casts a pall over the eventual success of the collaboration.

In the absence of meaningful specifics, there’s not much to do except continue the speculation, looking at where the two companies’ circles of interest might intersect.

At best, a collaboration that truly benefits both companies does indeed have the potential for creating an enormous base of media-capable computers, providing a stable platform for software developers and an attractive product for customers. In the worst case, Apple will have given up all its juiciest technologies to its biggest competitor. The key question is how difficult it will be to conjure up a deal where Apple stands to win as much as IBM.

IBM’S MAKING MONEY
It’s a little surprising to note that stodgy old IBM, unlike Apple, already has a multimedia strategy firmly in place. Also unlike Apple, IBM claims to derive significant income from selling its multimedia “solutions” in the corporate and education markets. But the company still relies on expensive hardware and software add-ons, such as the M-Motion video adapter, the Audio-Visual Connection and its difficult-to-use InfoWindow hypertext software.

However, in October, IBM is expected to introduce a new line of computers with built-in, synchronized sound and full-motion video compression/decompression capabilities via a rejiggered DVI (Digital Video Interactive) chipset, part of IBM’s co-development agreement with Intel.

Robert Carberry, IBM’s assistant general manager of systems technology for the Personal Systems Group, has already presented this strategy in detail to more than 5,000 people at various industry conferences. The company makes no bones about the fact that retrofitting its existing computers with multimedia capabilities is not only possible, but will happen.

Investments and specifications. IBM has also set up an entire multimedia division and invested a great deal of money in developing what it considers to be viable content, or titles, which will help sell multimedia solutions in its target markets. Witness its investments in AND Technology for a five-disc educational videodisc series and its induction of filmmaker Robert Abel as an IBM Fellow.

When Microsoft announced specifications for the Multimedia PC (MPC) at its Multimedia Developers Conference in November 1990, the industry was abuzz about the fact that IBM didn’t endorse it. But what IBM did announce at the time was a co-development agreement with Microsoft on two standards — the Resource Interchange File Format (RIFF), for identifying and tagging data files within Windows applications, and the Media Control Interface (MCI), a standard set of commands for using videodisc players, MIDI devices, scanners, etc., in an application.

Next: APPLE’S SCATTERSHOT APPROACH
Although Apple has done the most so far to introduce the concept of multimedia computing to the masses, its approach to the market has been scattershot at best.

Apple does not ship any kind of standard video input or compression hardware with the Macintosh. But so far, it still ships the software tools of choice for multimedia developers. All Macintosh computers come with HyperCard, which has become a de facto standard for driving interactive videodisc applications, and Apple has always built sound output, and now input, into every Mac.

Apple was once very vocal about the future of multimedia, to the point of showing whizzy new technology demonstrations at every industry conference. About a year ago, though, chairman and CEO John Sculley started saying publicly that the company was reconsidering its multimedia strategy “because no one is making any money at it.” Apple’s Multimedia Lab in San Francisco was drastically scaled down, and the company’s early attempts to evangelize the benefits of multimedia computers were “re-organized” out of existence.

Apple’s public withdrawal from the multimedia market both angered and worried developers who’d made strategic investments in Mac products. MacroMind, for example, has since released Windows versions of Director as has Authorware. Formerly all-Mac title developers such as ABC News Interactive and Voyager Co. are either developing or have announced PC versions as well.

Then there’s QuickTime. Despite the public retreat, the people within Apple responsible for multimedia retrenched and started working on more powerful, system-level tools to make what the company calls “media integration” easier for developers and users.

The result was QuickTime, announced in June. This powerful and well-received set of system software extensions includes a new file format standard called “Movie,” which coordinates time-based information (such as sound, video and animation) and text; audio and video compression routines; user interface standards and guidelines; and system support for the aforementioned timing mechanism as well as device and image compression managers. (See Vol. 1, No. 1, p. 14.)

Next: WHAT’S IBM GOT?
Last month, IBM announced a new 3.5″ erasable magneto-optic disc drive and format, which it plans to market widely throughout the industry. (See Vol. 1, No. 2, p. 12.)

Capable of storing 128 MB of data and retrieving it slightly faster than a CD-ROM, the new drive has obvious implications for data-intensive applications. Multimedia developers are forced today to tote around big hard drives or use less-than-reliable 44MB removable hard disks to run multimedia presentations.

Cutting a deal to build this technology into the Macintosh and giving it cross-platform compatibility could benefit IBM by providing licensing revenues and by helping it set a new drive standard. A deal could benefit both firms by giving developers a more useful (though still slow) medium for software distribution.

Another IBM nugget is its joint venture with Toshiba for the development of color flat-panel displays. The partnership between the two firms, called Displays Technology Inc. (DTI), is already selling a 10.5″ display as a laptop option for about $3,000.

Apple has long been seeking an acceptable color flat-panel display for its laptop computers, and it will need one for any multimedia player it hopes to sell into the home market. DTI’s displays are one-tenth the weight and use one-tenth the power of traditional CRTs. Buyers are paying dearly for DTI’s learning curve on manufacturing the display; each pixel on the display is controlled by its own transistor, thus one dead transistor yields one dead pixel, which is unacceptable. But as yields go up, cost will come down.

Next: NO DEAL FOR APPLE
No deal. However, it appears that Apple won’t be gaining access to either of these technologies. In a peculiar twist, insiders say that IBM is not offering Apple a license on the color flat-panel display. Though it was indeed discussed, it was not part of the negotiations when this story was written. Similarly, the erasable optical drive is also not part of the bargain — though that’s because Apple is also in the process of evaluating Sony’s new Mini Disc erasable optical disc for use as a storage peripheral.

WHAT APPLE CONTRIBUTES
QuickTime and HyperCard appear to be the only announced technologies that Apple brings to the table in its negotiations with IBM. HyperCard is an obvious plus on the Apple side of the equation — IBM’s competing software, InfoWindow, is insanely difficult to install and use. A “HyperCard PC” would be great for IBM’s image and for its customers, but it is likely to help Apple’s finances only slightly (especially depending on who’d have to make it work on the PC). After all, how much of a bargaining chip is software that is shipped free with every Macintosh?

And whether QuickTime will benefit Apple in a potential partnership with IBM, in light of IBM’s RIFF/MCI involvement, presents an interesting conundrum. The overlap between QuickTime and RIFF/MCI looks significant, and IBM does not appear to be reneging on its OS/2 and Windows multimedia development based on the upcoming agreement with Apple.

In fact, IBM recently announced that it is shipping Windows and Microsoft’s multimedia extensions to Windows (which now include image compression) to all its educational clients — in essence, supporting the Multimedia PC specification without getting the official MPC stamp of approval. IBM has told some executives privately that it is still committed “100 percent” to implementing RIFF and MCI in upcoming system software extensions.

These differences won’t amount to much if and when the joint venture for object-based system (OBS) software yields fruit. One of the benefits of an object-based system is that it can handle any kind of object, as long as it has been taught to recognize that type of object as part of its library.

But in the meantime, if Apple and IBM believe it’s important that their media-based applications be able to run on both companies’ platforms (which is likely — see section on content, below), they’ll have to decide how to merge or bridge the differences and similarities between QuickTime and RIFF/MCI for pre-OBS applications.

This could be easier than it looks. QuickTime was designed to be extensible enough to handle any data type, so supporting RIFF and MCI, or even CD file formats and device drivers, may be a trivial job.

In any case, QuickTime is still very early in its development, which means software based on it cannot ship until QuickTime itself does. In the interim, IBM is covering its bets by making sure it supports the MPC standard. Apple obviously does not have that luxury.

So there must be something else. Considering the dearth of visible chips Apple can lay on this bet, it’s safe to assume that it is waving some unannounced technologies at IBM as part of the negotiations.

Who stands to benefit from this?

Certainly IBM does. At this point, Apple at least has the benefit of still being publicly perceived as a friendlier platform for multimedia development. If it licenses its hottest new technologies to IBM before they hit the open market, IBM’s name recognition and still-formidable marketing clout stand to usurp the profits (other than up-front licensing fees) Apple could derive from such technology.

Could the specification for Fast Eddy, Apple’s still unannounced consumer multimedia player (a la CDTV or CD-I), be one of these technologies? Though the project was supposedly killed, at this point anything is possible. But it’s hard to see how creating an instant competitor would benefit Apple, which is already having a hard enough time navigating the treacherous financial waters of lower margins on its cheaper computers. Granting IBM a license to manufacture any of its potentially competitive technologies would seem akin to shooting itself in the foot with a large-gauge shotgun.

Next: THE MPC ADVANCES, UNCHALLENGED
One obvious reason that IBM and Apple want to collaborate on multimedia is to drive a wedge into Microsoft’s rapid progress toward a standard multimedia platform. In a move that Apple should have made long ago, Microsoft has rallied PC software and hardware companies around the MPC standard, and even formed the MPC Marketing Council (see Vol. 1, No. 1, p. 17) to help educate consumers on the benefits of media-based computing.

The MPC Marketing Council also is promoting and supporting MPC as a standard for software development, based on Windows and its multimedia extensions. This is exactly what IBM and Apple are doing with their joint venture for a new object-based operating system, and why it is essential for both to agree on how to make their separate multimedia extensions compatible for multimedia developers.

However, as mentioned earlier, IBM is continuing its relationship with Microsoft on support for standard file and device control formats and so far is continuing its Windows development. One can only assume that Apple does not have sufficient clout to force IBM to stop this type of support if and when an agreement is reached. Of course, there’s nothing stopping Apple from working with Microsoft to merge RIFF/MCI with QuickTime, but with present levels of animosity and distrust between the two companies, it’s not likely to happen any time soon.

SO, (WHERE) DOES COLLABORATION MAKE SENSE?
Making content compatible. It’s become a cliche in the world of multimedia that “content is the key” to the success of digital media-based products. As any multimedia developer knows, creating even one version of a title is a staggeringly complex and expensive task; porting a media-intensive title to another computing platform, with the present (tiny) installed base of media-capable computers, is simply a waste of time.

Thus the benefits of deep compatibility between Macintosh and IBM computers are formidable. A phalanx of multimedia computers from IBM, combined with a few million media-ready Macintoshes, would create an enormous, attractive base of media-capable computers that would give the MPC some very serious competition for developer attention.

Both companies seem to be aware of this fact; one widespread rumor is that they are discussing the licensing of content from Time-Warner (as is Microsoft, allegedly). If true, such a licensing agreement could signal the beginning of a separate and powerful new multimedia publishing group based on the model being pioneered by Microsoft Press today. Though sources say this is not presently being discussed, it’s certainly one way for Apple and IBM to leverage their combined clout into a profit center that would clearly benefit both companies.

The formation of a separate company to manage the entire IBM-Apple multimedia partnership is a likely result of the present negotiations. Whether this will be successful from both companies’ perspectives depends on your degree of cynicism.

Though the other three parts of the IBM/Apple agreement prove Apple is clearly not abandoning its interest in the high-end business market, its recent formation of a consumer products division has clarified its intentions to sell smaller, more mobile machines that take advantage of the growing world of digital media and the convergence of computing, communications and content.

Next: WHY NOT SONY?
Why not Sony? In fact, there are those who say Apple would be much better served by forming a partnership with Sony, which does not pose the direct threat to Apple’s present business that IBM does. Such a partnership does seem to make far more sense.

And although IBM says it also is evaluating consumer technologies including CD-I, the company has made no bones about its intention to shore up tottering market share in its traditional, bread-and-butter business computing products in all areas from mainframes to personal computers.

A divergence of interests. A cynic might say that this divergence of interests — Apple’s new focus on consumers, and IBM’s renewed focus on business — highlights corporate cultural differences that make it impossible for IBM and Apple to collaborate effectively on multimedia technologies. In addition, it’s clear that IBM stands to gain the most from this agreement: hooking its name to Apple, in the multimedia arena where Apple is widely perceived as a technology leader, lends a certain cachet to the stodgy IBM name.

At the aforementioned analysts’ meeting, Sculley acknowledged that as hardware gets cheaper and computers become more interchangeable, software will cast the swing vote in a commodity market. Thus compatibility between multimedia technologies is a vital link to new customers for either company.

But an optimist (or a skeptic, depending on whether you think carving up the pie this way smacks of collusion) could counter that such a divergence in basic market interests will make it possible for Apple and IBM to differentiate their future businesses in a significant way, while cooperating where it makes the most sense: on enterprise-wide, “open” computing and networking standards for the business market, where competition is inevitable anyway.

Next: BUSINESS AS USUAL
Right now, however, the downside of the potential collaboration — especially from Apple’s vantage point — is significant. Despite a heightened emphasis on software, Apple is still, after all, in the business of selling computers. There is no “prior art,” so to speak, on whether IBM is really serious about abandoning its former “we want it all” ways and will strive to cooperate, not dominate, in the larger world of heterogeneous computing.

The converse is also true. As the unproductive Apple/Digital Equipment partnership proves, Apple isn’t exactly expert at such cooperation. It has for years based its business strategy on a proprietary architecture, unlike IBM. An ample serving of humility on both sides is in order.

It’s also important not to underestimate the fact that Microsoft has a multimedia specification, working and in place, which it is marketing the hell out of. Though the potential fruits of cooperation are formidable, developers are likely taking a “wait and see” attitude about working with any Apple-IBM venture until their joint multimedia strategy is more clearly delineated.

Their bets are covered. In the meantime, Microsoft is signing up developers right and left, while IBM has clearly covered all its bets. Despite a deepening rift between the two firms, IBM is continuing its cooperation with Microsoft (in multimedia, at least) and is withholding at least one key technology — the color flat-panel display — from the Apple negotiations.

Even if an agreement between Apple and IBM were signed today, the joint venture — as well as Apple’s present business — would still be hurt in the short term by the lead time the MPC has already had to establish itself as an industry standard.

But if Apple can come up with a compelling way to make this agreement work for it as well as it works for IBM, such a collaboration could yield what everyone in the market has been praying for: a cutting-edge, industry-standard platform for developing and deploying media-based applications. Unfortunately, by the time they deliver it, there will be two such “standards.” Business as usual.

- Denise Caruso

Next: Andy Bechtolsheim specs a digital movie player
DIGITAL MOVIES FOR THE MASSES
A Hollywood (CD) player is where it’s at

The following article is based on a presentation by Andy Bechtolsheim, co-founder of and vice president of technology for Sun Microsystems, at the Digital World conference in June. There, he detailed his concept of a digital-movie player based on technologies that exist today. Bechtolsheim opened by saying, “Sun will not build this, but if I were Sony or Matsushita, I’d be crazy not to.” Considering his track record, it would have been pretty dumb not to listen. The audience responded in kind — nearly every hand in the house shot up when he asked if they would buy such a player for $1,000.

We are here [at Digital World] because of the success of digital compact discs and digital memories, not because we have been waiting for digital transmission technologies.

Perhaps the single most important opportunity in digital media at this time is to set a standard for high-quality digital video and to deliver mass-market products based on this standard. Similar to the great improvement in sound quality achieved by digital CD-audio, there is clearly a major consumer demand to view high-quality movies in the home.

The player I’m about to describe could be available in two years. While a digital video player does not reduce the future potential of digital television broadcasting and cable, it is not dependent on digital broadcasting either. Thus it will deliver the benefits of digital, high-quality movies without changing the infrastructure required to deliver them.

There are a number of bottlenecks to widespread acceptance of digital television broadcasting. One is regulatory: the U.S. Federal Communications Commission has yet to select a standard. Another is a capital bottleneck: broadcasters will upgrade to expensive digital equipment slowly, only when there is sufficient consumer demand for digital TV. And on the telecommunications side, there is a transmission bottleneck, because our homes are wired with copper cables instead of high-bandwidth, fiber-optic cable.

Next: Digital video before digital TV
Because of these bottlenecks, digital-media video will achieve market penetration long before digital television. The question is, “How long?” My estimate is that it could take five years for digital television to achieve the same installed base as digital-media video systems.

Digital media technology is here today. It costs only $1 to produce a compact disc. The distribution channels are well established. Retail outlets such as Tower Records will sell the video CDs, the Blockbuster Videos of the world will rent them, and stores such as Circuit City will do a brisk business in the sale of digital video disc players.

It’s obvious that standards are good, but in the end, great products are better. A high-quality digital video player could be launched by whoever has enough market share to set a media standard. This requires strength in both movies and consumer electronics. Two companies, Sony and Matsushita, already have both in place. Now we need the product — a high-resolution VCR replacement to display high-quality, digital movies.

The player’s most important purpose is to play digital movies. If it can also be a video game machine, that’s even better. But it’s not a computer. It must be a mass product from the start, since mass products are what set standards. It also must cost $1,000 or less.

What will make this product possible is the next generation of CD technology, already in the labs, which will have the storage capacity required for high-quality digital video. The level of resolution, which needs to be defined, will be somewhere between today’s NTSC standards and the HDTV format. Of course, the exact format depends on the storage capacity of the media, the video compression achieved, and the display resolution and sample rate. In my formula for the player, I am assuming a display resolution of 960×540 pixels, noninterlaced (progressive) scan, at 24 frames per second.

The perceived quality of such a display will be about one-half of interlaced HDTV and four times better than NTSC. This level of quality seems to be a good choice. Whatever the final standard is, it must be set with care, since media standards generally last 20 years or more.

Next: What are the applications?
Besides playing movies, other applications are likely to include games, books and interactive programming; these will require programming interfaces, additional data definitions and so forth. But what’s more important is to set the standard for high-quality digital movies on compact disc.

All of today’s CD-based products are red-laser-based. To establish a new media definition, it’s critical to use the best technology that you can get in quantity. A green-light laser technology, which I call CD2, provides twice the density of today’s CDs, and is already close to market. The best technology available in the labs is blue-light laser, CD4, which allows four times as much data on a CD as the present red-light laser technology.

APPLICATION REQUIREMENTS
The media specifications for a digital video player are determined by the application requirements for interactive tasks such as audio, video, movies, electronic books, images (such as an image of a page) and 35mm photos.

For CD-quality audio, sound must be sampled at 44 kHz. Compressed at 5:1, it requires 30 KB per second of playback, which achieves very good sound quality, as demonstrated by the new Sony magneto-optic Mini Disc format that uses this level of compression. To get surround sound will require an extra channel, for a total of about 50 KB per second.

For NTSC-like picture quality, let me assume a resolution of 480×270 pixels and the ability to store the data at 24 frames per second, the rate at which most movie material is recorded originally. With 100:1 MPEG compression, playing one second of this type of video requires 62 KB of storage.

Clearly we must have much higher quality than NTSC in this product. Four times the resolution of NTSC requires 960×540-pixel resolution. Using the same frame rate of 24 Hz and 100:1 compression, playing one second of digitized film stock requires 250 KB of storage.

(Note: MPEG isn’t yet quite good enough to do high-quality compression at a 100:1 ratio. This seems like an achievable goal but will require more work. Also, audio isn’t included in these figures — it needs to be added on top. For movie and surround-sound audio combined, the player would need a data rate of about 300 KB per second.)
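For readers who want to check the arithmetic, the short Python sketch below reproduces these figures. Bechtolsheim does not state a color depth; assuming 16 bits (2 bytes) per pixel before compression is our reconstruction, and it yields the numbers quoted above.

# Reconstruction of the data-rate arithmetic above.
# Assumption (not stated in the talk): 16 bits (2 bytes) per pixel before compression.

BYTES_PER_PIXEL = 2
FRAMES_PER_SECOND = 24
VIDEO_COMPRESSION = 100      # the 100:1 MPEG-style ratio assumed above
AUDIO_COMPRESSION = 5        # 5:1, as on Sony's Mini Disc

def video_rate_kb(width, height):
    # Compressed video data rate in KB per second.
    raw_bytes_per_second = width * height * BYTES_PER_PIXEL * FRAMES_PER_SECOND
    return raw_bytes_per_second / VIDEO_COMPRESSION / 1000

def audio_rate_kb(channels=2, sample_rate=44100, bytes_per_sample=2):
    # Compressed audio data rate in KB per second.
    raw_bytes_per_second = sample_rate * bytes_per_sample * channels
    return raw_bytes_per_second / AUDIO_COMPRESSION / 1000

print(video_rate_kb(480, 270))    # ~62 KB/s, the NTSC-like case
print(video_rate_kb(960, 540))    # ~249 KB/s, the proposed 960x540 format
print(audio_rate_kb())            # ~35 KB/s stereo; add a surround channel for ~50 KB/s
# Movie plus surround audio therefore lands at roughly 300 KB per second.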

Next: Electronic books, images, photos.
If the digital movie player can handle 300-KB-per-second playback, then 30 KB can be displayed per “page” (read: screen) at a rate of 10 Hz, which is an acceptable page-turning time. This is plenty for textual information, since a single text page requires about 10 KB of information, and even less with compression. A full-page image compressed 20:1 with JPEG requires about 30 KB, which still satisfies the 10-Hz page-turning goal. Photographic-quality images, such as the Kodak Photo CD provides, require much more storage — about 6 MB per photograph. With 20:1 JPEG compression, the image requires 300 KB of storage. This allows users to view photographs at one picture per second, although subsampled photographs could be viewed at movie rates.
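The page-turning and photo figures follow from the same 300-KB-per-second budget. A minimal sketch, using only the numbers stated above:

# Page-turning and still-image arithmetic for a 300-KB-per-second player.
PLAYER_RATE_KB = 300                         # sustained playback rate, KB per second

pages_per_second = 10                        # the 10-Hz page-turning goal
print(PLAYER_RATE_KB / pages_per_second)     # 30 KB per "page": room for ~10 KB of text,
                                             # or a full page image compressed 20:1 with JPEG

photo_kb = 6 * 1000                          # Kodak Photo CD-class photograph, ~6 MB uncompressed
photo_kb_compressed = photo_kb / 20          # 20:1 JPEG compression -> 300 KB
print(PLAYER_RATE_KB / photo_kb_compressed)  # 1.0 full-quality photo per second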

WHY PUT MOVIES ON CD?
The advantages of mastering movies on CD become obvious when compared to other media. A floppy disk, though it costs the same as a CD (about $1), is too small for anything. Sixty minutes of audio on a floppy would cost $100; 120 minutes of a digitized film-quality movie, $1,000.

One movie, one disc. Magneto-optical, rewritable discs are much more expensive — $10 each — and even at higher densities, it would still cost $160 to master a 120-minute movie onto MO discs. Even if a single MO disc could hold a full movie, studios would not be anxious to use a writable medium — it would make consumer copying much too easy. Read-only media is what the software industry prefers. In addition, MO discs are not compatible with the audio CD format.

Compact discs are the most cost-effective digital media today, with a manufacturing cost of about $1 per disc. The main drawback of today’s CD format for movies is its limited capacity. Assuming a 300-KB-per-second data rate for high-quality digital movies, an audio CD would hold only 30 minutes of movie material. Four or five discs would be required to hold a feature-length movie. CD2, the double-density green-light laser format, still doesn’t achieve the goal of one movie, one disc. Only the blue-light CD4 has sufficient data density to fit an entire movie on a single disc.
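The one-movie, one-disc argument reduces to simple division. In the sketch below, the capacity of today’s CD is inferred from the 30-minute figure above (about 540 MB of usable data at 300 KB per second); treating CD2 and CD4 as exactly twice and four times that density is our simplification.

import math

RATE_KB_PER_SECOND = 300     # high-quality digital movie data rate
CD_CAPACITY_MB = 540         # implied by "30 minutes" at 300 KB per second
MOVIE_MINUTES = 120

def minutes_per_disc(capacity_mb):
    return capacity_mb * 1000 / RATE_KB_PER_SECOND / 60

for name, capacity_mb in [("CD  (red laser)", CD_CAPACITY_MB),
                          ("CD2 (green laser)", 2 * CD_CAPACITY_MB),
                          ("CD4 (blue laser)", 4 * CD_CAPACITY_MB)]:
    minutes = minutes_per_disc(capacity_mb)
    discs = math.ceil(MOVIE_MINUTES / minutes)
    print(name, round(minutes), "minutes per disc;", discs, "disc(s) for a 120-minute movie")

# Output: 30, 60 and 120 minutes per disc -- only CD4 holds a feature film on one disc.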

Next: COMPRESSION AND DISPLAY ISSUES
The MPEG standard, still being refined, will include several different resolutions and sampling rates. The trick for this product is to make the right choice about the key parameters that provide the best display quality to the consumer while meeting the constraints of the available media storage capacity and data rate.

Square pixels are absolutely essential for optimum data storage, compression and compatibility with future computer applications.

As mentioned earlier, there is a big advantage to storing video on digital media at 24 Hz noninterlaced (progressive scan), since this is how virtually all movies are originally recorded. Any digital system requires a frame buffer for decompression, so the obvious approach is to refresh each picture three times at a 72-Hz screen refresh rate, which also solves the flicker problem that is present in the NTSC format. In comparison, a 60-Hz interlaced NTSC approach wastes bandwidth, makes compression more difficult, reduces picture quality and causes refresh flicker.

Players must offer scalable resolution to support both existing NTSC/PAL-format displays and the new digital video format. In addition, to make such a system forward-upgradeable, attention must be paid to upcoming digital video transmission standards.

THE PRODUCT LINEUP
The following product lineup reflects the above requirements.

* 28-inch digital video display — $1,195
Noninterlaced, 28-inch CRT, with 16:9 aspect ratio
960×540 resolution with square pixels
Digital video input from CD4 player
Analog TV receiver for NTSC/PAL viewing, with “letterbox” capability

* Stationary player for home use — $595
CD4 video/audio player
Optical digital connection to digital video display

* Portable digital video player — $1,195
Noninterlaced, 6-inch color TFT LCD display with 16:9 aspect ratio
480×270 resolution for subsampled video
CD4 video/audio player in same enclosure
Optical digital connection to digital video display

THEY HAVE THE TECHNOLOGY
The technology exists now for a low-cost, consumer digital video player. Though Sun is not planning such a product, several Japanese companies have the technology to make such a product, independent of standards, regulatory bodies or broadcast issues. I expect to see this kind of product on the market within two years.

- Andreas Bechtolsheim, with Denise Caruso

Next: Taxing the medium
A DEAL THAT SMACKS OF GREED
Guess who is served by taxing blank tapes and recorders? Not the consumer

In late July, acting on orders from the Federal Judiciary, the consumer electronics and recording industries agreed on a package that would effectively end a four-year battle over consumers’ rights to make digital audio recordings at home. At issue are the millions of dollars that the recording industry feels it is losing in sales as a result of the home recording of music.

Home recording has always been a source of animosity between recording artists and electronics manufacturers. In the past, the only copies were analog and therefore imperfect. But the ability to create “perfect” copies, which digital technology allows, has led the music industry to demand protection or compensation for lost sales as a result of “pirating” (large-scale illegal copying and selling of titles) and home copying.

Next: DAT TAKES FIRST HIT
When Sony announced the Digital Audio Tape (DAT) format and recorders five years ago, the music industry made so much noise that Sony delayed shipment of recorders into the U.S. until this past year. Most record labels refused to release titles on the format.

Even Sony Music stalled at releasing titles on DAT. No other company even entered the domestic market. This made for slow sales, high costs and little, if any, consumer support. Only musicians and audiophiles paid the premium. (DAT technology also developed a secondary market as a computer storage device, primarily for tape backup.) In the meantime, Philips and Tandy developed DCC as a competing technology, and Sony came out with a new medium, Mini Disc (see Vol. 1, No. 2, p. 19).

Then Sony got sued. Last year, the National Music Publishers Association (NMPA) sued Sony, seeking an injunction that would force it to stop marketing the recorders unless it were willing to pay royalties to the artists NMPA represented. The consumer electronics industry was adamant that it would not pay royalties on its technology to artists. Caught somewhere in the middle was the consumer, to whom all options cost money.

Congress has been loath to get involved in such an issue, lest it upset industry (a potent lobby) and/or the consumer (who does, after all, elect members of Congress). Numerous bills have been proposed with no real action taken. Finally the courts ordered the various parties to work out a deal among themselves and propose it to Congress. They have now hammered out that deal.

Parties to this agreement include the Recording Industry Association of America and the NMPA, representing the recording industry, and the National Association of Retail Dealers of America (NARDA) and the Electronics Industry Association’s Consumer Electronics Group, representing manufacturers. It is said to have the support of the major artists’ unions and associations. The entire deal was brokered by John Roach, chairman of Tandy, the largest consumer electronics dealer in the country and a contributor to the DCC specification.

The deal calls for manufacturers of digital recording equipment to pay a royalty of 2% of the wholesale price on recorders and 3% on blank disks and cassettes. There is a minimum tax of $1, and a maximum of $8 for a single tape deck and $12 for a dual cassette recorder. These monies would be deposited into a pool, to be distributed to composers and artists based on record sales and airplay.
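Stated as code, the royalty schedule is simple. The Python sketch below just applies the percentages and caps described above; whether the $1 floor also applies to blank media is not spelled out in the agreement as reported, so it is applied here only to recorders.

# Royalty schedule from the proposed agreement: 2% of wholesale on recorders,
# 3% on blank media, with a $1 floor and an $8 cap on single-transport decks
# ($12 on dual-deck recorders).

def recorder_royalty(wholesale_price, dual_deck=False):
    cap = 12.00 if dual_deck else 8.00
    return min(max(0.02 * wholesale_price, 1.00), cap)

def blank_media_royalty(wholesale_price):
    # 3% of wholesale; the $1 floor is assumed not to apply to media.
    return 0.03 * wholesale_price

print(recorder_royalty(500.00))                   # $8.00 -- the cap kicks in above $400 wholesale
print(recorder_royalty(500.00, dual_deck=True))   # $10.00
print(blank_media_royalty(8.00))                  # $0.24 on an $8 wholesale blank tape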

All hardware manufacturers would be required to build the Serial Copy Management System (SCMS) into all devices. SCMS would allow only a first-generation copy, adding an inaudible code to the copy that makes it impossible to duplicate again.
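The copy-limiting rule itself is easy to model. What follows is only a toy sketch of the behavior described above; the real SCMS carries its copy-permission flags in the recording’s digital subcode, not as an object in a computer’s memory.

# Toy model of the Serial Copy Management System (SCMS) rule: an original may be
# copied digitally, but the copy is flagged so it cannot itself be copied again.

class Recording:
    def __init__(self, title, copy_allowed=True):
        self.title = title
        self.copy_allowed = copy_allowed

def digital_copy(source):
    if not source.copy_allowed:
        raise PermissionError("SCMS: source is itself a copy; the recorder refuses to dub it")
    # The first-generation copy carries the "no further copies" flag.
    return Recording(source.title + " (copy)", copy_allowed=False)

master = Recording("Prerecorded album")   # an original disc or tape
first_generation = digital_copy(master)   # allowed
# digital_copy(first_generation)          # would raise PermissionError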

Ironically, this digital solution provides far better protection than exists today in the analog world, where tape-to-tape decks are sold freely with no copy protection whatsoever. Yet no royalty tax has been placed on blank analog tapes. One might ask why a tax on digital tapes is necessary when SCMS eliminates the problem of multigeneration copying.

Most manufacturers had voluntarily agreed to include SCMS in their recorders even before this agreement was reached. The suit against Sony will also be dropped (although it can be reinstated if things go awry).

Most significant, perhaps, is that such an agreement, if made law by Congress, would establish the legal right for consumers to make audio tapes, analog or digital, and would prevent any future copyright infringement suits for lost income against the manufacturers and marketers of digital recording equipment and blank media.

Next: WHO DO THEY THINK THEY’RE KIDDING?
The entire deal smacks of greed, long a character trait of the entertainment industry. The recording industry has already responded to the loss of revenue from taping by raising the price substantially on prerecorded analog tapes. This higher pricing, of course, lowers the incentive for consumers to buy tapes from the record companies and, instead, raises the incentive to copy.

Shaft. Can you dig it? As consumers, we must be aware of the ramifications of copying other people’s work: recording artists are compensated based on the number of records sold. On the other hand, it’s bad enough that consumers are still paying a premium price — generally around $15 per disc — for audio CD technology that costs $1 per disc to produce.

Those people who do not copy CDs onto blank analog tape should not be penalized for the actions of others — which at the present moment, they certainly are. Now the record companies have it both ways: higher prices on prerecorded tapes and records and a royalty on blank media.

Consumers will also be paying a premium, in the form of a price differential, for purchasing digital equipment. This will create a permanent price disparity between analog and digital equipment and will ultimately slow the acceptance of the new technology. While some experts believe that the royalty will not affect the retail price for equipment that today costs $800-1,200, what happens when volumes increase and retail prices drop? To an industry that is made or broken on $25 price differentials, this could be significant. The cost to the consumer of blank media is expected to rise by approximately 25¢ per tape.

There’s just no excuse. It is our opinion that the music industry’s concern over piracy was more than adequately answered with the SCMS specification, which was being voluntarily implemented without any royalty agreements.

The issue of perfect copies is no longer even valid. Sony conceded that DAT was essentially dead as a consumer product. After Philips and Tandy announced DCC, Sony produced the Mini Disc, a recordable disc that holds the same amount of music as a traditional CD at half the size. However, both DCC and Mini Disc utilize a compression algorithm that loses a small amount of information each time a copy is made, thereby making perfect duplication from copy to copy impossible. (For a full comparison of the three formats, see Vol. 1, No. 2, p. 19.)

The computer software industry faced a similar problem a number of years ago. Software manufacturers were spending a lot of money to place complex copy-protection code on their applications to prevent duplication. But they found that it was much more economical to remove all copy protections and lower the cost of the software to promote sales!

Guess what? It worked. Producers were happy. Consumers were happy.

But thanks to industry greed, the royalty agreement seems to be the only way that digital technology will ever get into consumers’ hands. The entire issue has become so emotional over the years that no logical conclusion now seems possible.

Next: THE IMPLICATIONS
Anyone working with digital media needs to be aware of such battles. This particular battle is being fought on a limited field: only digital audio reproduction is at issue. Analog audio seems to have escaped unscathed, at least from a royalty tax (though the cost of prerecorded tapes has gone up about ten percent in the past six months), and implicit in this deal is the agreement not to include video or computer data.

But the groundwork is being laid for just such future battles. How much longer will we be able to differentiate digital video from computer data, or a movie from a software title? The Motion Picture Association of America (MPAA) has been watching this deal take shape with great interest. You can be sure that it will be looking for its own piece of the royalty tax when digital video arrives in the home.

- David Baron

Next: Virtual reality pioneers aim at entertainment
STEPPING INTO VIRTUAL REALITY
Pioneers band together to launch location-based entertainment

Four of the premier companies working on technologies for virtual reality have banded together in an alliance designed to launch the nascent technology sector into the world of real products.

Virtual reality, or “VR,” is the buzzterm for the creation of synthetic, simulated experiences and their delivery systems, such as headgear and data gloves.

These experiences come in two forms: virtual environments, immersion in a computer-generated “reality”; and remote presence, which uses remote cameras and sensors to allow people to “be” in a real place that’s physically distant.

Spearheading the new alliance is Palo Alto, Calif.-based Telepresence Research, founded in 1990 by Scott Fisher and Brenda Laurel. Fisher was director of the Virtual Environments Workstation (VIEW) project at the National Aeronautics and Space Administration’s Ames Research Labs for five years; Laurel is a 15-year veteran of the software industry, including the seminal Atari Labs in the early 1980s.

Next: A Telepresence Powerhouse

The Telepresence Alliance is one of the most hopeful signs for VR since the hype began a couple of years ago. Its members have developed seminal technologies or processes that have been used in VR and in applications ranging from the entertainment industry to aerospace. Together, they cover all the technology bases required to make VR a success.

Crystal River Engineering of Groveland, Calif., first designed and manufactured a 3D digital audio system for the VIEW Lab at NASA Ames. A critical component of VR, its system, called the Convolvotron, simulates the acoustics of a room and allows the listener to experience sound in virtual space the same way that we do in the real world. Thus, a computerized object’s sound diminishes as it moves away from the listener; when the listener turns away, the sound changes proportionally to whichever ear is closer to the object.

Fake Space Labs of Menlo Park, Calif., specializes in display technology and software. Believing that the much-ballyhooed head-mounted viewers are too cumbersome and don’t provide a very high-quality image, Fake Space invented the Molly, a remote camera platform, and the BOOM, a high-resolution, free-standing stereoscopic display that lets viewers move freely in environments generated either by computer or by camera without “suiting up.”

Michael Naimark and Co., based in San Francisco, is best known for its work in “surrogate travel” via video maps of Aspen, Paris, San Francisco and Karlsruhe, Germany. Interactive multimedia and videodiscs and alternative display environment designs are the firm’s specialties, and much of Naimark’s work has been done for the world’s finest museums.

A few names on their combined client roster are Lockheed, Bell Communications Research, Stanford Research Institute, Walt Disney Imagineering, Lucasfilm, City of Paris, the U.S. Armed Forces, MIT Media Lab and National Geographic.

At last month’s Siggraph show in Las Vegas, Alliance members were responsible for four of the 16 VR installations within the show’s new “Tomorrow’s Realities” exhibit.

Next: ‘Location-Based Entertainment’
VR has been (prematurely) credited with the ability to enhance all forms of human experience, from creating a drug-free “acid trip” to helping architects design buildings more effectively. Today’s virtual realities rely on hardware and software systems costing hundreds of thousands of dollars.

But Alliance members are pioneering what they see as the first viable commercial application for these virtual environments: location-based entertainment, such as high-tech theme parks and museums.

Becoming the game. Virtual-reality theme parks are considered to be the hottest growth area for VR/simulation technologies today. In Japan alone, say Alliance members, 200 such parks are already in the planning stages. Such disparate Japanese interests as department stores, airlines and furniture makers, all of which want to catch the next big entertainment wave, are researching investments in VR theme parks.

Smaller than traditional U.S. theme parks such as Disneyland, venues for location-based entertainment are more akin to video arcades. Fisher says that video-game developers such as Sega, for example, are already planning arcades with VR installations in mind. In a world that seems demonically possessed by video gaming, such theme parks would allow gamers the ultimate experience: to become part of the game itself. One such arcade recently opened in Chicago.

Pay as you play. “Four years ago, a [video game cabinet] cost $5,000 to $7,000,” says Laurel. “Many of today’s cabinets are already using motion platforms and VR-style technologies with six-figure price tags.”

Placing VR in public places that charge admission fees, say Alliance members, is a logical way to make such installations pay for themselves and to iron out the kinks that keep the cost of VR too high for one-on-one installations. “The throughput of bodies can be smaller” than a Disneyland-style park, says Laurel. “We can build to a $250,000 platform which will pay for itself quickly enough.”

Other possibilities include the ability to, say, “swim” around the Great Barrier Reef from the mainland without touching a drop of water. Museums are also exploring the possibilities of creating environments to allow visitors to “virtually” wander through the solar system, planetarium style, or through dioramas such as those often found at natural history museums.

Next: NO MORE STUFFING BUFFALO
The cost benefits to museums, which are typically strapped for cash and/or raw material, are formidable. “It’s easier to copy a disk than to stuff another buffalo,” says Michael Naimark.

Despite the Alliance’s obvious talents, it seems that investing big research dollars to launch a new entertainment medium in a sliding global economy would be more than a little iffy. But Alliance members say the research money is out there, and they are undaunted. “The bad news is that the economy is bad,” says Mark Bolas of Fake Space. “But that always means that the entertainment business gets better.”

- Denise Caruso

Next: Sony’s second-gen interactive video
SECOND-GENERATION INTERACTIVE
Sony intros two new video products

Two divisions of the Sony Corporation of America, Sony’s marketing arm in the U.S., have announced new products that aim directly at users and producers of interactive video. The first, from the Computer Peripheral Products Company, is a frame-accurate Hi-8mm video deck with enhanced functionality and complete computer control. The second, from the Multimedia Systems Group, is a multiformat disc player (called a “combi” player in the consumer world) with an RS-232 computer interface.

A VCR and a computer peripheral. The new Hi-8 deck, called the Sony Vdeck video drive (CDV-1000), was designed from scratch as a computer peripheral; Sony has removed all of the control buttons from the front of the box, leaving all functions to be controlled by the computer.

The Vdeck utilizes Sony’s VISCA (Video System Control Architecture) protocols. VISCA allows up to seven video devices to be daisy-chained through the serial port of a computer. VISCA is platform independent. Software to control the devices via computer will be included in specific applications; Apple, MacroMind, HyperPro, DiVA and approximately 50 others support VISCA within their products.
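For developers, the practical point is that one serial port addresses the whole chain. The Python sketch below, using the pyserial library, shows the general idea; the packet framing (an address byte, a command payload, a terminating byte) follows the broad VISCA convention, but the specific byte values are placeholders of our own, not taken from Sony’s Vdeck documentation.

# Sketch of addressing daisy-chained VISCA devices from a single serial port.
# The framing below is the general VISCA pattern; payload bytes are placeholders.

import serial   # pyserial

def send_visca(port, address, payload):
    # Send one command to device 1-7 on the daisy chain and return its reply.
    if not 1 <= address <= 7:
        raise ValueError("a VISCA chain supports up to seven devices")
    packet = bytes([0x80 | address]) + payload + bytes([0xFF])
    port.write(packet)
    return port.read(16)    # read back whatever acknowledgment the device returns

port = serial.Serial("/dev/ttyS0", 9600, timeout=1)   # port name and speed are assumptions
PLAY = bytes([0x01, 0x02])                            # placeholder payload, not a real Vdeck command
reply = send_visca(port, address=2, payload=PLAY)     # talk to the second deck in the chain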

The first VISCA product was the Vbox controller, announced earlier this year, which allowed a user to drive any Sony video device utilizing the LANC controller (a consumer industry standard protocol) from a computer. So, in effect, the Vdeck has an internal Vbox.

The VISCA technology and accompanying product line, including the Vdeck and Vbox, provides the basic tools and functionality for producing and presenting information on video using a personal computer.

The benefits of Hi-8mm. The development of different markets for the 8mm and Hi-8mm formats has been an important focus for Sony over the past few years. The phenomenal success of 8mm camcorders (ten million have been sold by Sony alone to date; sales of three million are expected this year) has encouraged Sony to look for new ways to exploit the format’s small size and high quality.

Hi-8mm video has almost twice the resolution of standard VHS video tape (400 lines of horizontal resolution, as opposed to approximately 220 lines for VHS). In addition, Sony has left room on the tape for significant additional features, including time and date codes, tape and segment labeling and an additional 8-bit digital audio track. Sony maintains that the function list of Hi-8mm tape is still not fully developed.

The Vdeck can utilize all of these features, even on previously recorded tapes. In addition, users can create single-button video presentations in which the tape will automatically pause at predetermined points, and begin again by hitting the play button. It also has a built-in audio/video switcher that allows the user to switch independently among three additional audio and three video sources.

Blurring the lines. Sony is correct in its assumption that, for people creating their own content, 8mm is a much more useful format due to ease of use and the lower cost of camcorders. The Vdeck, which is designed as a computer peripheral, straddles the line between professional and consumer products. The 8mm format was initially conceived for consumer markets (albeit with hooks to, and the quality necessary for, professional applications). Sony has added professional functions and capabilities, such as time code and computer control, to a consumer-based product.

Rob Haitani of Sony’s Computer Peripheral Products says the Vdeck can be used for video business presentations. The presenter, he says, creates the video using desktop video tools, including the Vdeck and consumer decks controlled through Vboxes. Segments are put into proper sequence; voice-overs and background music are added; graphics or animations created on the computer can be inserted. (The latter step requires additional hardware to convert the RGB video of the computer to the NTSC video of television.)

He or she then takes a conventional 8mm (non-Hi8) dupe of the tape and a Sony Video Watchman to the presentation site. (Total weight: about 1 pound.) The video is viewed on the Watchman (if it is intended for two or three people) or plugged into either a monitor or a display system for larger groups. Of course, if the tape is to be widely distributed, it can be duplicated onto the more common VHS tape.

The competition. Sony’s main competition is the NEC PC-VCR. NEC’s $2,000 S-VHS deck, released earlier this year, also has a serial interface, time coding and tape labeling. However, it lacks the other capabilities, including the ability to work with more than one video device at once. NEC has promised a Vbox-type device that controls up to four PC-VCRs at the same time. While Sony has not yet announced pricing on the Vdeck (official word will come in October), it is likely to be competitive with NEC’s product.

The MDP-1100 Multi Disc Player. The second product comes out of Sony’s Multimedia Systems division of the Business and Professional Group, which is chartered with developing new channels, applications and uses for existing Sony technology.

After a long, slow growth period in which Pioneer seemed to be their only supporter, videodiscs are now becoming popular in education and business markets. Software applications that require random-access full-motion video are increasingly successful, especially in the education arena. The state of Florida, for example, recently completed a deal with Pioneer and ABC News Interactive to purchase a large number of laserdisc players along with ABC’s Health Series of interactive videodiscs. Business training and informational services are also beginning to rely on such technology.

At the same time, the advent of the low-cost “combi” player (which will play 4.7″ digital audio CDs as well as 12″ analog videodiscs) has brought a modest but respectable upsurge of sales in the consumer market. Unfortunately, the consumer combi players do not include the RS-232 interface required for interactive applications. Nor can they access the second audio track available for multilingual titles or audio commentary.

The industrial players do include an RS-232 interface and can access the additional audio track, but they are not combi players and are not available through consumer channels. (The industrial and consumer players are supplied by different corporate divisions that rarely talk with each other.)

Success: the mother of competition. Combi players now sell through discount consumer electronics outlets for $400 or $500. The least expensive (and therefore most popular) industrial player has been the $750 Pioneer 2200.

Sony’s new $795 MDP-1100 Multi Disc Player is the first product to bridge the two markets. It is a full-featured machine, capable of playing 12″ and 8″ videodiscs, 4.7″ and 3″ audio CDs, and 4.7″ CD video discs. But it also includes a jog-shuttle control and (hallelujah!) an RS-232 interface.

All the disc formats can be controlled by a personal computer, through software licensed from Sony and built into any given application. (The MDP-1100 does not use the VISCA protocols, as it was developed before the Vbox. We can assume that future versions will support VISCA.)

Developers can also use a collection of HyperCard X-commands sold by the Voyager Company of Santa Monica, Calif., to build their own applications. The MDP-1100 also offers optional bar code readers, compatible with the Pioneer Laser Bar Code System, allowing users to program the player easily by scanning a bar code included in a title’s reference manual. It includes a particularly nice remote control, which even technophobic teachers and consumers should be able to master.

We’re happy to see this. We have been begging for a laserdisc player like this for some time. It is such a logical product that we do not understand why no one has introduced one sooner. The ability to play any sort of audio or video optical disk in the same machine has obvious benefits for education and industrial markets. The ability to hook up your computer to your laserdisc or CD player is an equally obvious plus for the home market — as is the ability to access the second audio track.

Distribution plans. Although Sony does not plan to sell the MDP-1100 through its normal consumer electronics channels, it will, for the first time, offer the player as a computer peripheral to be sold through computer channels. The MDP-1100 will also be sold by the Voyager Company, by Optical Data Corp., and by suppliers to vertical markets such as education, government and industry.

Voyager is so pleased with the new machine that it hosted the first public showing of the Sony player in its booth at Macworld and will be selling the box through its direct mail channels. One of the first producers of interactive titles, Voyager developed the Criterion Collection of videodiscs, a select library of important and influential motion pictures, some of which contain additional audio tracks of commentary.

In addition, Voyager has produced the CD Companion series of HyperCard stacks, which accompany audio CDs and educational and informational videodisc/HyperCard stack collections. As mentioned, it also sells a HyperCard toolkit for controlling a number of video devices, including the MDP-1100. The MDP-1100 provides one affordable box that can play all its titles.

Media user’s and developer’s delight. Sony has done its homework in developing the Vdeck and multidisc player. These two products are significant additions to the tool sets of anyone interested in media integration. They go a long way toward providing functionality and media accessibility at affordable price points for multimedia developers and users.

While Sony is not the first with such devices, these two products are more complete and well thought out than their predecessors. Digital media producers and users are now beginning to see a second generation of media integration products — products that directly address their needs.

David Baron

Next: Two multimedia pioneers merge
MACROMIND, PARACOMP TO MERGE
Trying to achieve critical mass, it took “two nanoseconds” to see potential

In a move designed to create critical mass in a market filled with small, struggling software companies, two San Francisco-based multimedia software companies — Macromind Inc., of Director and MediaMaker fame, and Paracomp, known primarily for its Swivel 3D and FilmMaker products — have signed a letter of intent to merge.

Both companies claim the move is indeed a merger and not a thinly veiled buyout. The new company will be called Macromind/Paracomp, already dubbed “MMPC.”

Two months, start to finish. Tim Mott, president and CEO of Macromind, said what began as discussions about joint promotions only two months ago turned almost immediately to the idea of a merger. According to Mott, it took their respective boards “about two nanoseconds” to realize the potential in such a deal.

Paracomp brings to the table its expertise in 3D graphics construction and animation, and Macromind provides its experience as a pioneer in media integration tools. For the “creative professional,” MMPC will continue to deliver tools for graphics, animations and multimedia presentations.

Though both companies’ products are well regarded within the industry, it’s widely known that they have found it difficult to be profitable in the nascent market for multimedia and graphics software. Macromind, in fact, now supplements its Director revenues by leading in-house training classes for new customers.

A changing world. In addition, competition is beginning to heat up for Director- and FilmMaker-style products; competitors are hot on the trail of creating easier-to-use multimedia authoring systems. Since 90% of Director and FilmMaker owners use both products, it was natural for Macromind and Paracomp to see the virtue in combining their reputations and resources to counter such threats.

Those more critical of the move believe that combining product lines is an easy way to avoid the higher cost of innovation while creating a larger market presence. But with MacroMind’s brand-new product and underlying architecture (see the following story), that point may be hard to prove.

However, both companies have been struggling financially for years, and it’s said they want to take the combined company public some time next year. To do so, their balance sheets must look significantly better than they do today.

Product consolidation. While the initial opportunities will be in joint sales and marketing ventures, the two companies will ultimately combine their particular spheres of influence to create new products and solutions for multimedia producers.

Mott says that there are no immediate plans to scrap any products or merge them into a single entity. He explains that MacroMind’s research shows that creative professionals who use products such as Director and FilmMaker generally use same-genre products from many suppliers because they “understand the subtleties between programs, and will use whichever is right for the job.”

What’s closer to the truth, however, is that no firm making multimedia authoring tools has yet understood the needs of its users well enough to provide all the necessary functionality in a single product. We hope the combined knowledge of the two firms can remedy that problem.

Mott acknowledges the probability that MMPC will develop, over time, a more streamlined product line. That’s good news since no one, we believe, really wants to use two incompatible authoring systems. And MacroMind 3D is already functionally compatible with Swivel 3D Professional.

Who’s at the top? Mott will hold the title of CEO of the new company, while Bill Woodward, currently CEO of Paracomp, will serve as its chairman. They will share a so-called “Office of the President.” Mott will initially be responsible for international operations, finance and product development. Woodward will be responsible for U.S. sales and marketing.

Senior management will consist primarily of MacroMind veterans, including VPs of product development and marketing, as well as the CFO. The VP of sales comes from the Paracomp side.

Mott says that the final agreement will be signed in early August. MMPC plans to move into shared quarters by year’s end.

David Baron, Denise Caruso

Next: ACTION AND OPERA FOR WINDOWS
A NEW PC PRODUCT FOR MACROMIND
Action and Opera for Windows

One way Macromind president Tim Mott hopes to jump-start MMPC into the big time is with Macromind’s just-introduced Action, a Windows program that creates presentations on the fly using Opera, Macromind’s brand-new multimedia engine. Sources say Macromind projects that Action will account for 80% of next year’s revenues.

Action was designed specifically for the $200 million PC-based (as opposed to Macintosh) professional presentations market. Mott says that the $495 product, in development for a year and a half, produces the same results as a simple Director presentation, “only cheaper and easier.”

Opera makes its entrance. Action automates into a single step many of the basic functions of a presentation: growing bar graphs, bullet charts and the like. The product will import Excel graphs and create individual objects out of each bar, making such animations easier to create. MacroMind claims nothing on the market today automates business presentations to this degree.

The important news here is the Opera engine, a time-based, object-oriented architecture. Under Opera, each piece of a presentation (a line of text, a graphic and so on) is treated as a separate object that can be controlled or edited from any of a number of different views or edit lines.

The analogy behind Opera is this: every object in a multimedia presentation, like every character in an opera, makes an entrance, does something on stage and exits, all on very specific cues. Thus, all parts of the object’s “stage time” are individually accessible. Since the engine is based on a real-time clock, all pieces of the presentation are guaranteed to maintain the proper synchronization.
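
To make the stage metaphor concrete, the sketch below shows, in modern Python, what a time-based, object-oriented engine of this general kind might look like. The class and function names are invented for illustration and do not represent MacroMind’s actual Opera code or interfaces; the point is simply that every object carries its own entrance and exit cues, and a single real-time clock decides what is on stage at any moment.

    # Hypothetical illustration of a time-based, object-oriented presentation
    # engine in the spirit described above. Names are invented for this sketch
    # and do not reflect MacroMind's Opera architecture in any literal way.
    import time

    class StageObject:
        """One presentation element with entrance and exit cues, in seconds."""
        def __init__(self, name, enter_at, exit_at):
            self.name, self.enter_at, self.exit_at = name, enter_at, exit_at

        def is_on_stage(self, t):
            return self.enter_at <= t < self.exit_at

    def run(objects, duration, tick=0.5):
        """Drive every object from one real-time clock so cues stay in sync."""
        start = time.monotonic()
        while (t := time.monotonic() - start) < duration:
            on_stage = [obj.name for obj in objects if obj.is_on_stage(t)]
            print(f"t={t:4.1f}s  on stage: {on_stage}")
            time.sleep(tick)

    if __name__ == "__main__":
        cast = [
            StageObject("title text", 0.0, 3.0),
            StageObject("bar chart", 1.0, 5.0),
            StageObject("logo", 4.0, 6.0),
        ]
        run(cast, duration=6.0)

In the sketch, scheduling against the clock rather than a frame count means a slower machine simply evaluates fewer ticks instead of drifting out of sync, which is the property the stage analogy describes.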

Director-compatible. Almost by definition, Opera is extensible. Ultimately, says Mott, the interactive functions of Director will be added. In addition, adding video to a presentation will be as easy as adding a video object to the library. While video support is not yet implemented, Mott says plans are in place to add it.

Opera is a core technology for MacroMind’s success, and future generations of Director-type products will certainly be built upon it. In the case of Action, however, Macromind has its work cut out to convince consumers that the presentation product differs in a fundamental way from popular products already on the market, such as Harvard Graphics and PowerPoint.

- David Baron, Denise Caruso

Next: The first interactive media content review: Verbum Interactive

——————————-
CONTENT:
VERBUM INTERACTIVE
First CD-ROM periodical is a hopeful sign

With this issue, Digital Media will begin a review section specifically for content, the foundation upon which the larger world of digital media will be built.

The time is right. Over the past couple of years, only a small number of products combining digitized graphics and video, text, sound and interactivity have trickled into the computer-based multimedia world.

It’s simply not possible to examine such titles in the same way computer software, or even film and music, have been examined in the past. The presentation matters as much as the content itself; the interplay of media, production values, user interface, and raw hardware and software performance raises issues not previously considered.

To develop what amounts to a new literacy for digital media, the industry needs to bring a fresh perspective and artistic sensibility to the subject of content. We hope to contribute to that process.

A FITTING LAUNCH: VERBUM INTERACTIVE
The new two-disc CD-ROM magazine from Verbum Inc., publisher of the computer-art journal Verbum Magazine, is a fitting launch for this new section. The first periodical of its kind, Verbum Interactive (or VI) is an ambitious attempt to corral all the various media — video, audio, graphics and text — into a useful and entertaining format.

Plan to spend at least five hours playing around with Verbum Interactive. There’s more than enough information and entertainment to hold you for at least that long, although because of the technical limitations of both MacroMind Director and CD-ROM, you may want to consider either a prescription for barbiturates or an advanced course in Zen meditation before you do so. (More on technical limitations later.)

Appropriate technology. The best overall feature of Verbum Interactive is its striking presentation of media-based information in a way that actually makes sense and utilizes the strong points of various media instead of being gratuitously whizzy.

Verbum Roundtable. For example, a regular column in Verbum Interactive will be the Verbum Roundtable — a collection of video one-on-ones with a “Tonight Show” feel. Issue 1.0 includes a lineup of multimedia notables. Each participant is separately interviewed on the same topic, but the screen design gives the illusion that they’re all sitting together. A user can choose either to see an overview or to watch the whole thing.

The digitized video doesn’t spin off the CD-ROM at anywhere near real-time speeds, but it’s interesting to note how little that matters. Good audio synchronization and quality make up for it, and together they deliver the sense of personal contact and immediacy of television at its best.

Show and tell. Another column, called “Secrets of the Universe Revealed,” was equally imaginative. In it, Verbum art director Jack Davis used still imagery and screen shots from Adobe’s Photoshop to show step-by-step how he designed a company’s logo. How-to columns of this type, as well as VI’s Demo section for products, will prove a powerful tool for showcasing multimedia tools and products. Such products are almost impossible to describe, but their potential becomes immediately apparent when you can see them perform.

The Verbum Gallery. Verbum shows off its roots as a “journal of personal computer aesthetics” in Verbum Gallery. Included in Issue 1.0 are 13 multimedia “exhibits,” including nicely reproduced photography, with CD-quality narration and music, by musician Graham Nash; a beautiful (though somewhat inscrutable) “artitorial” by Barbara Mehlman and John O’Neill; a sampler (plus graphics) of music by Pauline Oliveros, D’Cuckoo and Chris Yavelow, among others; samples from Warner New Media’s interactive Mozart CD-ROM; a slide show about ecology synched to a Todd Rundgren song; and an inspiring selection of projects by first-time Director users from the Art Center College of Design in Pasadena, Calif.

Technical performance in this section, however, was sketchy. The way users interacted with each exhibit changed from one to the next with little or no explanation. In the Todd Rundgren and Mozart sections, the music tracks (to be heard through headphones or speakers hooked to the CD-ROM drive) weren’t accessible, and no explanation about how to get to them was given on the screen. The documentation said the tracks were to “run from your hard disk,” but it didn’t say how that was to be done. To be most effective, there should be software on the CD-ROM that goes out to the hard disk and launches those tracks without human intervention.
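
As a rough illustration of the kind of launcher the previous paragraph asks for, here is a modern Python sketch that makes sure the audio tracks are sitting on the hard disk, copying them over from the CD-ROM once if necessary, before handing them off to playback. The volume name, folder paths and file extension are invented for this sketch and have nothing to do with how Verbum actually organizes its disc.

    # Hypothetical sketch of an automatic track launcher: ensure the audio
    # files live on the (faster) hard disk, copying from the CD-ROM only
    # once, so the user never has to move files by hand. All paths are
    # invented for illustration.
    import shutil
    from pathlib import Path

    CD_TRACKS = Path("/Volumes/VERBUM_CD/tracks")   # read-only CD-ROM folder
    HD_TRACKS = Path.home() / "Verbum" / "tracks"   # local hard-disk cache

    def ensure_local_tracks():
        """Copy any missing track files to the hard disk; return local paths."""
        HD_TRACKS.mkdir(parents=True, exist_ok=True)
        local = []
        for src in sorted(CD_TRACKS.glob("*.aiff")):
            dst = HD_TRACKS / src.name
            if not dst.exists():
                shutil.copy2(src, dst)   # one-time copy; later runs are instant
            local.append(dst)
        return local

    if __name__ == "__main__":
        for track in ensure_local_tracks():
            print("ready to play:", track)  # hand off to the playback engine here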

Features. Though VI’s feature stories had potential, the slow access time of the CD-ROM combined with an inconsistent user interface made them cumbersome and difficult to follow.

But like “Secrets of the Universe” and the Demo section, some of the features made excellent use of interactive media. Hal Josephson’s piece on interactive marketing, for example, included functional, point-and-clickable examples of the genre from well-known multimedia producers such as Doris Mitsch of Clement Mok Designs and Robert May of Ikonic.

This section also highlighted one of the weakest points of VI’s interface: the way it presents text on-screen. Throughout the publication, the print is too small and isn’t broken up enough to be easily readable. Online information services long ago learned the trick of short “paragraphs,” often only a sentence long, to keep the reader from going into blur mode.

Technical limitations. Let’s get this one out of the way right now, since you’ll probably be reading it in every review of a CD-ROM product: CD-ROM is too slow. It’s horribly, annoyingly slow. Until it is improved, access time will be the foremost reason that people will refuse to buy products in this medium. Its sluggish response makes the term “interactive” almost laughable, since it’s nearly impossible to have anything even approaching real-time interaction with a CD-ROM application.

User interface. That said, as mentioned earlier, VI’s user interface has some fairly serious consistency problems that must be addressed. In addition, and this assumes Verbum actually wants people to use the disc more than once, there is no apparent, consistent or easy way to browse the disc for specific information or to cancel a selection once you’ve clicked on it. If at any time you use the standard Macintosh “cancel” convention — Command-Period — the whole application quits. This is a time-consuming (and in this case, unnecessarily annoying) process.

Sound off. Once you’re past the opening screen, there’s no way to adjust the volume. Most of VI’s sound comes through the Macintosh, not the CD player, and the Mac’s control panel isn’t available unless you’re Director-literate enough to know how to resurrect it. Audio controls should always be immediately accessible in real time, in case the phone rings or something in your environment changes that requires either increased or decreased volume.

Yes, you should buy it. Despite these drawbacks, this is a title to own. VI is a snapshot of the state of the art in interactive multimedia: it is a perfect example of how to enhance an already-existing genre — in this case, magazines — with digital technology. VI’s abundance of creativity and intelligent use of media excites the imagination; at the same time, the formidable technical limitations of today’s tools and hardware detract noticeably from the experience. And it’s clear that multimedia designers are learning by leaps and bounds about what works and what doesn’t in user interface design. Examples of both are evident in VI.

Not for the masses. Though VI is a step in the right direction, it is certainly not multimedia for the masses. It requires a color Macintosh with a (still pricey) CD-ROM drive, a minimum of 5 MB of random-access memory, and speakers and/or headphones for connection to the Macintosh and/or the CD-ROM drive. But consumer versions of the product for CD-I and CDTV are already in the planning stages, and I expect VI to provide great inspiration to those who’d like to see more such innovative products become a reality. The product is available now, and a quarterly subscription service is expected to begin in early 1992 for both Macintosh and Windows systems.

Denise Caruso