•EuroPARC Explores ‘Media Spaces’
•An Open Road to the Information Age
•NREN: Building the Open Road
•I/O, The Scope of Kaleida
•IBM goes Ultimedia, Photo CD spec takes shape
•Radius snags Touchstone; MPC makes big New York splash

•FOCUS: EUROPARC EXPLORES ‘MEDIA SPACES’

Though most attention has been focused on digital media’s potential to revolutionize the home and entertainment industries, equal potential exists for such media to transform the work environment.
During a recent visit we made to England, EuroPARC — the research arm for Rank Xerox, a subsidiary of Xerox Corp. — demonstrated how the use of multiple media has vast potential to revolutionize the way people in organizations work together. By use of what they call “media spaces,” EuroPARC researchers have created a collaborative work environment that obviates the separation of people by offices and even continents, using technologies that are available today.

•AN OPEN ROAD TO THE INFORMATION AGE

Last month, the U.S. Senate approved legislation to create a national high-speed computer network and to nearly double federal funding for high-performance computing research and development. The legislation, called the “High Performance Computing and National Research and Education Network Act of 1991,” will provide approximately $1 billion over five years to develop the network, dubbed the NREN, and support other research and development in high-performance computing.

•NREN: BUILDING THE OPEN ROAD

While tracking the progress of the NREN this summer, Mitchell Kapor — founder of Lotus Development Corp. and cofounder of the Electronic Frontier Foundation, or EFF — wrote a detailed account of how to prevent the NREN from solely serving the scientific and educational communities. He believes the NREN could demonstrate how a broadband network can be used to benefit the general public.
Along with a brief account of the legislation and its provisions, we provide an excerpt from his treatise. It provides an excellent foundation for discussions of the political and ethical implications of “civilizing” the wilds of the telecommunications frontier.

•I/O
Trip Hawkins, who has a neat new job as president of a new interactive media venture with Time Warner, outlines ten principles for a mass market.

•THE SCOPE OF KALEIDA
The ink is dry on Apple and IBM’s alliance, but the days of speculation are far from over.

•IBM GOES ULTIMEDIA
The first in its new line of computers sports a super-fast chip, CD-ROM XA, 16-bit audio and a media control panel.

•PHOTO CD SPEC TAKES SHAPE
The consumer technology from Eastman Kodak holds great promise for compact disc-based presentations, too.

•RADIUS SNAGS TOUCHSTONE
Apple Computer relinquishes eight video technologies, granting Radius exclusive development and licensing rights.

•MPC MAKES BIG NEW YORK SPLASH
Technical limitations seem irrelevant as more than 60 titles and development tools debut at the “MPC Event.”

• EVENTS

EUROPARC EXPLORES ‘MEDIA SPACES’
Audio-video network allows colleagues to work together effectively, naturally

Though much attention has been focused on digital media’s potential to revolutionize the home and entertainment industries, equal potential exists for such media to transform the work environment as well.

During a recent visit we made to Cambridge, England, EuroPARC — the research arm for Rank Xerox, a subsidiary of Xerox Corp. — opened its doors to Digital Media to demonstrate how the use of multiple media (including analog audio and video) has vast potential to revolutionize the way people in organizations work together.

By use of what they call “media spaces,” EuroPARC researchers have created onsite a collaborative work environment that obviates the separation of people by offices and even continents, using technologies that are available today.

The social is the technological. What’s unusual about EuroPARC as a technology research facility is its heavy emphasis on human factors and the social aspects of work environments. In fact, EuroPARC is predominantly staffed with social scientists; only one-third of its 30 staff members are technologists. Even EuroPARC director Bob Anderson is a sociologist and ethnographer, not an engineer.

The benefits of such an approach are probably obvious to anyone who’s been subjected to “user-friendly” software designed by software engineers with no grounding in the study of human interaction. “Programmers’ intuitions about the way people want to interact are often wrong,” says Wendy Mackay, the research scientist who oversees EuroPARC’s “media spaces” efforts.

At EuroPARC, researchers look closely at work habits and how they intersect with computing environments. Folding the sociology of work into the technological tools used to accomplish it is called “Computer Supported Collaborative Work,” or CSCW. Supporting such collaboration takes many forms at EuroPARC, but all share the central goal of creating “environmental interfaces to the information world” that allow colleagues in locales ranging from the next office to the next continent to work together effectively and naturally.

These interfaces are designed to use existing technologies to both work with and segue into new technologies. Projects in this area include computer interfaces for people who are working with paper, as well as video annotation, the tracking of collaborative design projects and the aforementioned “media spaces.”

Supporting collaboration with technology

As we have all heard by now, statistics show that the massive onslaught of computers into the workplace has automated most jobs that used to be done manually, but has not increased worker productivity one whit. Despite the increased use of local area networks, very little software has been designed to increase the level and quality of real communication between workers. Electronic mail is still a theory to most organizations, even those in the computer industry, and “groupware” that facilitates collaboration, such as Lotus Notes, is only starting to gain a foothold in the corporate environment.

Groupware and electronic mail do increase direct, computer-to-computer or person-to-person (or person-to-people) communication. But even if Notes were installed on every network server in the land, and every organization became an intensive user of electronic mail, only part of the problem would be solved.

Ignoring a rich universe. Researchers at EuroPARC have found that the social setting in which computers are used is a critical factor in effective collaboration and communication, and the existing electronic aids to that communication do not address the rich universe that exists outside the computer. This includes not only information sitting on people’s desks in analog form (i.e., paper documents), but also what’s called “the periphery” — the conversations, either overheard or participated in, that take place in the halls or around the water cooler or coffee pot.

John Seely Brown, director of Xerox Corp.’s Palo Alto Research Center, calls this type of communication “serendipitous.” He believes that serendipity must be incorporated into systems design in order to achieve the true potential of computers as communications tools.

An extensive audio-visual network, RAVE, by far the most developed of EuroPARC’s projects, is designed to bring this serendipity into the work environment seamlessly and naturally. It connects the research center staff — based at Ravenscroft House in Cambridge — and selected sites at the Xerox PARC facility in California in fascinating, and surprisingly useful, ways.

THE RAVENSCROFT AUDIO VIDEO ENVIRONMENT (RAVE)

Ravenscroft House has 27 rooms and five open areas, or “commons,” on four floors. It’s not sprawling in the least, but its layout is such that staffers are surprisingly isolated from each other. In a classic example of “this is not a problem, it’s an opportunity,” researchers decided in 1987 to install a complete data, audio and video network to see if they could mitigate the effects of physical isolation. Each room in the building is transformed into an AV “node,” equipped with a video camera, a video monitor, a microphone and speakers. Individual workspaces are equipped with Unix workstations and Macintoshes. Each node is connected with video and audio cables to and from a central switch.

Connections among the nodes are controlled by computer, so individuals can display views from different nodes on their desktop monitors, as well as set up two-way audio-video connections. Two distinct audio links are used for notification of events and for voice communication. This network, dubbed “RAVE” for Ravenscroft Audio Video Environment, creates what EuroPARC calls the “media spaces” that staffers inhabit in conjunction with their physical workspace.

General awareness vs. focused collaboration. Most collaborative work in today’s organizations is centered around two or more people getting together — whether electronically, via groupware or e-mail, or face-to-face in meetings — to solve a problem. EuroPARC calls this “focused collaboration,” and most of EuroPARC’s projects support this type of shared work. What’s much more difficult, where people are physically separated, is to create an environment of general awareness — who is around, what sorts of things they are doing, whether they’re busy, and the like.

The idea is to use media spaces to move fluidly between general awareness and focused collaboration in much the same way people do in physical space, and that’s what the RAVE network and software support.

RAVE: THE SOFTWARE

The RAVE software was originally written in Lisp by Paul Dourish, a computer scientist who specialized in artificial intelligence work at both the University of Edinburgh and the Edinburgh Concurrent Supercomputer Project. RAVE (now being ported to the C++ language) was designed to support a broad range of connections and interactions with the network, which EuroPARC calls “degrees of engagement.”

The components of RAVE. Five onscreen buttons allow network users to control various levels of engagement with the network. The lowest level is the “Background” button (see figure), which allows people to select a view from one of the common areas. At EuroPARC, anyone not actively engaged in other functions on the video network usually sets up his or her default video connection to the main commons, which also serves as a group meeting and presentation site.

“Lots of people use it to see whether there’s coffee in the pot,” says Mackay. And sure enough, though the video quality is fairly grainy, a viewer can most certainly see if coffee needs to be made. Perhaps more practically, it’s not necessary to walk to the commons to see if a meeting is about to begin or whether a specific person has arrived. The result is that network users can maintain a general physical awareness not only of their immediate surroundings, but of remote areas that they’re concerned with as well.

“Sweep” is a time-shortened version of Background; it’s a one-second, one-way, video-only connection to all nodes, or to any subset of nodes the user selects. It is also used to see who’s around and what they’re doing.

“Glance” is a more focused one-way video connection to a single connected node, also brief (about three seconds). Since both Sweep and Glance are brief and one-way, the effect is similar to walking by someone’s office and glancing in. Thus, general information about someone’s presence and activities is gleaned without jeopardizing their privacy. (More on privacy concerns later.) In fact, says Mackay, permission to Glance is granted by the receiver in advance, as part of RAVE’s setup procedure. Anyone receiving a Glance request is first notified by an audio cue.

The “VPhone” (videophone) and “Office Share” buttons are where two-way audio and video connections are exploited more fully. A VPhone call, much like a regular telephone call, must be explicitly accepted by the recipient. Office Share connections are identical to VPhone calls, except that the connections are designed to last longer — hours, days, months — to simulate actually sharing an office space with a colleague.

At EuroPARC, for example, one Office Share connection is set up between Dourish and a colleague, Victoria Belotti. Although they’re located on separate floors, anyone walking by Dourish’s office can stick a head in the door and ask Belotti, over the link, when he’ll be back, and vice versa. Since the video image is relatively small and the sound can be controlled, says Mackay, people don’t feel it’s necessary to engage in conversation unless they want or need to.
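RAVE’s five buttons differ only in direction, media, duration and whether the recipient must explicitly accept the connection. Here is a minimal sketch of those “degrees of engagement” in modern Python — the original software was written in Lisp, and the class and field names below are our own invention, not EuroPARC’s:

```python
from dataclasses import dataclass
from typing import Optional

# A sketch of RAVE's "degrees of engagement" as described above.
# Names and fields are illustrative, not EuroPARC's actual code.
@dataclass(frozen=True)
class Engagement:
    name: str
    two_way: bool                 # carries audio/video in both directions?
    audio: bool                   # video is always present; audio only on calls
    duration_s: Optional[float]   # None means open-ended
    needs_accept: bool            # must the recipient accept each connection?

ENGAGEMENTS = {
    "Background":   Engagement("Background",   False, False, None, False),
    "Sweep":        Engagement("Sweep",        False, False, 1.0,  False),
    "Glance":       Engagement("Glance",       False, False, 3.0,  False),
    "VPhone":       Engagement("VPhone",       True,  True,  None, True),
    "Office Share": Engagement("Office Share", True,  True,  None, True),
}
```

Note that Glance has `needs_accept` set to False even though it is announced with an audio cue: as the text explains, permission to Glance is granted in advance, not per connection.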

WHAT ABOUT BIG BROTHER?

RAVE has been designed to address the kinds of privacy concerns that a video and audio network of this type naturally raises.

Godard, a “smart” AV switch. One basic software component of RAVE, called “IIIF” (for “integrated, interactive intermedia facility”), was designed by former EuroPARC researcher Tom Milligan specifically to give users a degree of privacy and control over who may access their video and audio connections. IIIF’s original function was to link AV devices and “plugs” on the network — i.e., video and audio ports — to form point-to-point connections. Each plug and device is “owned” by its user; thus each user can control how others may connect to them. For example, Mackay could select which users on the network she wants to have access to her “video out” plug, which allows them to view the output of the video camera at her workstation.

But RAVE participants found that using IIIF alone was awkward — simply controlling their “plugs” made it difficult to allow a Glance, for example, but not a VPhone. More useful, they said, would be to control access around the network services themselves.

So Dourish added to IIIF a new layer called Godard, which uses IIIF’s underlying control mechanism to organize users’ connections by the function they want to use. Instead of shutting out or including other users on a global basis, RAVE users can now set up a “Glance control panel” or a “VPhone control panel” or an “Office Share control panel.” Each control panel allows them to select in advance who has permission to access their AV devices for each separate function.

For example, with Godard, Mackay can tell the software, “Only establish an ‘Office Share’ link between me and my assistant, but the following 12 people can ‘Glance’ into my office.” She can accept VPhone calls, like telephone calls, at the time of the call.
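Godard’s per-function control panels amount to an access-control table keyed by service rather than by raw plug. A hypothetical sketch of the idea follows — the class, method names and string labels are invented here, and VPhone is modeled as the text describes it, accepted at call time like a telephone call:

```python
# Sketch of Godard-style access control layered over IIIF's plug-level
# connections; names and structure are invented for illustration.
class AccessPolicy:
    def __init__(self):
        # service name -> set of users pre-approved for that service
        self.allowed = {}

    def permit(self, service, *users):
        self.allowed.setdefault(service, set()).update(users)

    def may_connect(self, service, caller):
        # VPhone is accepted per call, like a telephone, never pre-granted
        if service == "VPhone":
            return "ask"
        return caller in self.allowed.get(service, set())

# "Only establish an Office Share link between me and my assistant,
#  but the following people can Glance into my office."
policy = AccessPolicy()
policy.permit("Office Share", "assistant")
policy.permit("Glance", "dourish", "belotti")   # ...and so on, up to 12
```

The point of the extra layer is visible in `may_connect`: the decision is made per service, so a user can allow a Glance without also allowing a VPhone, which raw plug-level control made awkward.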

Notification via audio. Another useful function Godard performs is audio feedback to network users, telling them that a connection is being made and what kind of connection it is.

Different sounds are assigned to the different network functions. When a Glance connection is made to a camera, for example, Godard triggers a sound (the default setting is the sound of a door opening) before the connection is actually made. When the connection is broken, you hear the door close. VPhone requests might be signaled by a knock or a telephone bell, and a Sweep might be indicated by footsteps.

Though auditory “icons,” so to speak, might at first seem a little silly, they actually provide much useful information. Sound cues linked to the different system functions of RAVE don’t require focused attention from the receiver, while they provide intuitive, nonintrusive information about what’s going on.

Godard’s sound cues are reminiscent of the old Sonic Finder (for good reason — EuroPARC’s William Gaver, who designed the Sonic Finder, also designed the audio features of Godard). The Sonic Finder, which never quite made it out of the labs at Apple Computer, gave the same kinds of auditory cues about what functions Macintosh users were performing on the Mac’s desktop.

KEEPING TRACK WITH KHRONIKA

The RAVE system also contains a rich distributed environment for “event notification” and selective awareness of events in the work environment. Though EuroPARC researchers say the Khronika system is a cousin to online calendar systems, it supports a more general notion of events — it triggers reminders to users about everything from newly made video connections to meetings about to begin to information about visitors. Users can both browse the database and add to it at will.

Khronika allows a user, for example, to create a type of software agent that watches for all seminar events occurring in the conference room with the string “RAVE” as part of their description. He or she can then instruct the agent to send an audio notification five minutes before the relevant meetings will begin.
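In outline, such an agent is just a predicate over the event database plus a lead time and an action. The following is a speculative sketch of the mechanism described above — Khronika’s real interfaces are not documented here, and every name below is invented:

```python
# Sketch of a Khronika-style event daemon: users register "agents" that
# match events and fire a notification some lead time before they start.
class Khronika:
    def __init__(self):
        self.events = []   # (start_time_s, description, location)
        self.agents = []   # (predicate, lead_s, action)

    def add_event(self, start, description, location):
        self.events.append((start, description, location))

    def watch(self, predicate, lead_s, action):
        self.agents.append((predicate, lead_s, action))

    def tick(self, now):
        # fire each agent whose matching event begins within its lead time
        for start, desc, loc in self.events:
            for predicate, lead_s, action in self.agents:
                if predicate(desc, loc) and 0 <= start - now <= lead_s:
                    action(desc)

daemon = Khronika()
daemon.add_event(1000, "RAVE design review", "conference room")
daemon.watch(lambda desc, loc: "RAVE" in desc and loc == "conference room",
             lead_s=300, action=lambda desc: print("Starting soon:", desc))
```

Here the agent matches any conference-room event whose description contains “RAVE” and fires an audio (here, printed) notification once the event is within five minutes of starting.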

ERASING THE BOUNDARIES

Though most of the RAVE network’s video is used in its analog form, one system feature captures low-resolution digital video images for network distribution. Called Portholes, its purpose is to promote collaboration between distributed work groups.

Taking a look around the world. Codeveloped by EuroPARC’s Dourish and Sara Bly at Xerox PARC, the Portholes system links certain individual workstations at the Cambridge and Palo Alto research facilities. It captures, displays and updates a number of digital video images automatically, i.e., without having to initiate the Sweep function. Researchers are finding that these remote connections, despite their low quality, help network users both establish a psychological connection with users in remote locations and ease the problems of communicating with them.

The precursor to Portholes is an earlier prototype called Polyscope, which only distributes digitized images locally. Unlike Portholes, however, Polyscope interfaces to the AV network when an image is selected, allowing Glance or VPhone connections to be made. This feature will be incorporated into Portholes, which today only displays information about the selected image and gives the option of sending e-mail to that person.

WHAT’S THE DOWNSIDE?

On first look, EuroPARC’s research suggests that an intelligently designed, decentralized audio-video network can provide immediate and obvious benefits to organizations that need or want to increase their communications bandwidth. In the Seybold organization itself, for example, such a system would ease many of the problems inherent in running a seminars-and-publishing operation from three coasts — eastern and western U.S., as well as the United Kingdom — in four separate offices.

In such a situation, making even a simple electronic mail network useful is often an exercise in futility. The ability to make immediate visual connections in a variety of ways with coworkers in different cities and time zones could make a huge difference in the way business is conducted.

Gating issues. However, there are a few problems with the RAVE approach, none of which has any direct bearing on EuroPARC’s implementation.

One immediate question is that of cost. Mackay estimates that setting up the audio and visual nodes of the RAVE system, including the Xerox PARC connection, cost approximately $1,000, plus video and data lines and codecs.

Cost aside, any organization intending to implement a similar network is likely to encounter heavy internal opposition from workers concerned with invasion of privacy. It will be very difficult for people to become accustomed to the possibility of being under surveillance, even if they are assured that “they control the horizontal, they control the vertical,” in the spirit of the old TV show “Outer Limits.”

However, this is likely to be a familiar scenario in any connected office of the future. Groupware has already raised these concerns; live video and audio will only exacerbate them. Mackay says that the EuroPARC team is already acutely aware of these drawbacks, but that the atmosphere of mutual trust and cooperation, in addition to the extensive amount of user control that RAVE provides, makes it possible for the system to work. (Unfortunately, both trust and cooperation are in short supply in many corporations.)

In addition, she says, it’s important to remember that RAVE is a distributed system, where no one person can throw a big switch and peer into everyone’s offices on a global scale. “Big Brother is about centralized control,” she says. “Sure, all the paranoid stuff could happen. But we’re interested in hooking people together, especially when they’re working in isolation. Rather than being paranoid, our attitude is, ‘The technology exists, it’s here — let’s use it to solve communication breakdowns in groups.'”

Perhaps the most significant gating issue to the adoption of a RAVE-like system is the fact that it is deep in the bowels of a Xerox research lab. Though EuroPARC is working with British Telecom to install media spaces in two of its physically separated engineering sites, Mackay says she’s aware of no plans to commercialize the RAVE software. (The hardware is industry-standard stuff, readily available on the open market.)

If past experience with the Macintosh (and the many other wonders that have sprung from the loins of Xerox’s labs) is any indication, getting a RAVE system into commercial use will require immense, long-term effort from a very dedicated soul. It would be a great pleasure not to have to wait so long.

Denise Caruso

THE EVA ANNOTATOR

The ability to “mark up,” or annotate, video in the same way that copy-editing programs allow the markup of textual documents could be an extraordinarily useful tool for companies conducting product research or focus groups.

And guess what? It already exists.

Calling the Muse. In 1986, EuroPARC researcher Wendy Mackay, then on staff at the Massachusetts Institute of Technology’s Project Athena, wrote a piece of software called EVA, an Experimental Video Annotator.

(Project Athena was a major campus computing experiment, sponsored by IBM and Digital Equipment, that set out to build a large-scale, vendor-independent, distributed workstation environment; it was the birthplace of, among other things, the X Window System, and was recently taken out of the “experimental” phase and placed under the umbrella of MIT’s Center for Educational Computing Initiatives.)

Written in Athena Muse, an authoring environment for creating time-based interactive multimedia applications (developed on campus by the Athena Visual Computing Group), EVA connects a video source — either live or prerecorded tape — to a computer and permits researchers to annotate the video in real time. It was designed from the ground up to operate on distributed networks.

Candid camera. If, for example, researchers wanted to observe someone using a new software package, they would sit at workstations while live video — from a camera trained on the subject’s face — appeared on the screen in one window. Another window would display the subject’s screen; an additional window would be available for text annotations typed in by the researcher at various points in the viewing process.

It is also possible to tie the subject’s computer into the system and view a keystroke log. Muse synchronizes all these functions, so a keystroke log can be viewed at the same time as the subject’s facial expression and any annotations made by the researcher.

The EVA software provides some tagging controls that are always available for use throughout the session. The only default control is a time-stamp button, pressed whenever an interesting event occurs that the researcher wants to be able to reference later. Other buttons are custom-built to tag such things as keystroke patterns, visual images (single-frame, from the video), patterns of text transcribed from the audio track, clock times or frame numbers.
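The time-stamp mechanism reduces to logging tags against the session clock so that moments can be found again later. A minimal sketch, with class and field names that are ours rather than EVA’s:

```python
# Sketch of EVA-style annotation: each button press records a tag at an
# offset into the session so the moment can be revisited later.
# Names are invented for illustration.
class AnnotationLog:
    def __init__(self, session_start_s=0.0):
        self.session_start_s = session_start_s
        self.tags = []   # (offset_s, tag, note)

    def mark(self, now_s, tag="timestamp", note=""):
        """Record a tag at clock time now_s, relative to session start."""
        self.tags.append((now_s - self.session_start_s, tag, note))

    def find(self, tag):
        return [entry for entry in self.tags if entry[1] == tag]

log = AnnotationLog(session_start_s=60.0)
log.mark(now_s=75.0)   # the default time-stamp button
log.mark(now_s=90.0, tag="keystrokes", note="subject hunting through menus")
```

Because every tag carries an offset into the session, a synchronized playback system like Muse can later jump straight to the video, screen image and keystroke log at that moment.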

Though Mackay says she’ll be working on revamping EVA as part of her charter at EuroPARC, she’s “surprised no one’s commercialized it yet. It’s been out there for five years.” Is that an invitation?

Denise Caruso

U.S. MOVES INTO THE INFORMATION AGE
Senate approves funding to develop high-speed network

Last month, the U.S. Senate unanimously approved legislation to create a national high-speed computer network and to nearly double federal funding for high performance computing research and development.

The “High Performance Computing and National Research and Education Network Act of 1991,” coordinated by the White House Office of Science and Technology Policy, will provide approximately $1 billion over five years to develop the network, dubbed the NREN, and to support other research and development in high-performance computing.

A million computers. The program includes hardware, software, networking, education and basic research. It will link more than a million computers at more than a thousand locations in all 50 states.

Members of the House of Representatives and Senate will now meet in Conference Committee to resolve minor differences between the House and Senate versions of the bill. That committee report will then be sent to both House and Senate for approval.

After final approval, several federal agencies will receive funding to develop the network, including the National Science Foundation, the National Institute for Standards and Technology, the Defense Advanced Research Projects Agency, the Environmental Protection Agency and the National Institutes of Health. Other direct beneficiaries of the program will include the Library of Congress, the U.S. Geological Survey and the Department of Agriculture.

AVOIDING ELITISM

The Act’s passage. Albert Gore, the Democratic senator from Tennessee, gets most of the credit for the bill’s passage. A longtime and vocal proponent of a coordinated federal research plan to develop such a network and accelerate the application of high-performance computing, Gore is convinced the NREN will help U.S. industry regain its competitiveness in the global market.

Though widely acknowledged as a good idea, the NREN has also been a cause for concern among some members of the telecommunications community. They fear that, despite its vast potential to build a new communications infrastructure in the U.S., as important to the Information Age as highways and railroads were to the Industrial Revolution, the NREN might remain a haven for an elite group of scientists and researchers. They want the network to serve as a starting point to build a high-speed national public network that will jump-start a new industry based on consumer telecommunication services. (See the following story.)

Complete text of the act is available from Sen. Gore’s office, (202) 224-4944.

Denise Caruso

BUILDING THE OPEN ROAD
The NREN as test-bed for a National Public Network

Mitchell D. Kapor is familiar to most readers as the founder of one of the PC industry’s most successful startups: Lotus Development Corp. Most recently, Kapor has been making waves with an equally high-profile startup: the Electronic Frontier Foundation. A co-founder of EFF, he is deeply involved in the political and ethical discussions about civilizing the “electronic frontier” of telecommunications.

While tracking the progress of the High Performance Computing and National Research and Education Network Act of 1991 this summer, Kapor wrote a detailed account of how to prevent the NREN from solely serving the scientific and educational communities. Instead, he believes the NREN could demonstrate how a broadband network can be used to benefit the general public.

The piece below is only a small excerpt from Kapor’s excellent treatise. A complete version is available by contacting Kapor via the Internet, [email protected], or by writing to the EFF office at 155 Second Street, Cambridge, MA 02141.

A debate has begun about the future of America’s communications infrastructure. At stake is the future of the web of information links organically evolving from computer and telephone systems. By the end of the next decade, these links will connect nearly all homes and businesses in the U.S. They will serve as the main channels for commerce, learning, education and entertainment in our society.

The new information infrastructure will not be created in a single step: neither by a massive infusion of public funds, nor with the private capital of a few tycoons, such as those who built the railroads. Rather the national, public broadband digital network will emerge from the “convergence” of the public telephone network, the cable television distribution system and other networks such as the Internet.

The United States Congress is now taking a critical step toward what I call the National Public Network (NPN), with its authorization of the National Research and Education Network (NREN, pronounced “en-ren”). Not only will the NREN meet the computer and communication needs of scientists, researchers, and educators, but also, if properly implemented, it could demonstrate how a broadband network can be used in the future.

NREN AS PROTOTYPE

Far from evolving into the whole of the National Public Network itself, the NREN is best thought of as a prototype for the NPN, which will emerge over time from the phone system, cable television, and many computer networks. But the NREN is a growth site which, unlike privately controlled systems, can be consciously shaped to meet public needs.

The NREN design and construction process is complex and will have significant effects on future communications infrastructure design. It has frequently been described as akin to building a house, with various layers of the network architecture compared to parts of the house. In an expanded view of this analogy, planning the NII (national information infrastructure) is like designing a large, urban city.

The NREN is a big new subdivision on the edge of the metropolis, reserved for researchers and educators. It is going to be built first and is going to look lonely out there in the middle of the pasture for a while. But the city will grow up around it in time, and as construction proceeds, the misadventures encountered in the NREN subdivision will not have to be repeated in others. And there will be many house designs, not just those the NREN families are comfortable with. The lessons we learn today in building the NREN will be used tomorrow in building the NII.

The coming implementation and design of the NREN offers us a critical opportunity to shape a small but important part of the National Public Network.

Visions of social benefits. At its best, the National Public Network would be the source of immense social benefits. As a means of increasing social cohesiveness, while retaining the diversity that is an American strength, the network could help revitalize this country’s business and culture. As Sen. Albert Gore has said, the new national network that is emerging is one of the “smokestack industries of the information age.” It will increase the amount of individual participation in common enterprise and politics. It could also galvanize a new set of relationships — business and personal — between Americans and the rest of the world.

The names and particular visions of the emerging information infrastructure vary from one observer to another. Sen. Gore calls it the “National Information Superhighway.” Prof. Michael Dertouzos of the Massachusetts Institute of Technology’s Laboratory for Computer Science imagines a “National Information Infrastructure [which] … would be a common resource of computer-communications services, as easy to use and as important as the telephone network, the electric power grid, and the interstate highways.”

I call it the National Public Network (NPN), in recognition of the vital role information technology has come to play in public life and all that it has to offer, if designed with the public good in mind.

HOW WILL WE USE A NATIONAL PUBLIC NETWORK?

To what uses can we reasonably expect people to put a National Public Network? We don’t know. Indeed, we probably can’t know — the users of the network will surprise us. That’s exactly what happened in the early days of the personal computer industry, when the first spreadsheet program, VisiCalc, spurred sales of the Apple II computer. Apple founders Steve Jobs and Steve Wozniak did not design the spreadsheet; they did not even conceive of it. They created a platform which allowed someone else to bring the spreadsheet into being, and all the parties profited as a result, including the users.

Based on today’s systems, however, we can make a few educated guesses about the National Public Network. We know that, like the telephone, it will serve both business and recreation needs, as well as offering crucial community services. Messaging will be popular: time and time again, on systems ranging from the ARPAnet to Prodigy, people have surprised network planners with their eagerness to exchange mail. “Mail” will not just mean voice and text, but also pictures and video — no doubt with many new variations. One might imagine two people poring over a manuscript from opposite ends of the country, marking it up simultaneously and seeing each other’s markings appear on the screen.

We know from past demand on the Internet and commercial personal computer networks that the network will be used for electronic assembly — virtual town halls, village greens, and coffee houses, again taking place not just through shared text (as in today’s computer networks), but with multimedia transmissions, including images, voice, and video. Unlike the telephone, this network will also be a publications medium, distributing electronic newsletters, video clips, and interpreted reports.

OPENING THE FLOODGATES

We can speculate but cannot be sure about novel uses of the network. An information marketplace will include electronic invoicing, billing, listing, brokering, advertising, comparison-shopping, and matchmaking of various kinds. “Video on demand” will not just mean ordering current movies, as if they were spooling down from the local videotape store, but opening floodgates to vast new amounts of independent work, with high quality thanks to plummeting prices of professional-quality desktop video editors. Customers will grow used to dialing up two-minute demos of homemade videos before ordering the full program and storing it on their own blank tape.

There will be other important uses of the network as a simulation medium for experiences which are impossible to obtain in the mundane world. If scientists want to explore the surface of a molecule, they’ll do it in simulated form, using wrap-around three-dimensional animated graphics that create a convincing illusion of being in a physical place. This visualization of objects from molecules to galaxies is already becoming an extraordinarily powerful scientific tool. Networks will amplify this power to the point that these simulation tools take their place as fundamental scientific apparatus alongside microscopes and telescopes. Less exotically, a consumer or student might walk around the inside of a working internal combustion engine — without getting burned.

Building communities. Perhaps the most significant change the National Public Network will afford us is a new mode of building communities — as the telephone, radio, and television did. People often think of electronic “communities” as far-flung communities of interest between followers of a particular discipline. But we are learning, through examples like the PEN system in Santa Monica and the Old Colorado City system in Colorado Springs, that digital media can serve as a local nexus, an evanescent meeting-ground, that adds levels of texture to relationships between people in a particular locale.

To both local and long-distance communities, accessible digital communications will be increasingly important; by the end of this decade, the “body politic,” the “body social,” and the “body commercial” of this country will depend on a nervous system of fiber-optic lines and computer switches.

But whatever details of the vision and names given to the final product, a network that is responsive to a wide spectrum of human needs will not evolve by default. Just as it is necessary for an architect to know how to make a home suitable for human habitation, it is necessary to consider how humans will actually use the network in order to design it.

SOME RECOMMENDATIONS

In that spirit, I offer a set of recommendations for the evolution of the National Public Network. I first encountered many of the fundamental ideas underlying these proposals in the computer networking community. Some of these recommendations address immediate concerns; others are more long-term. There is a focus on the role of public access and commercial experiments in the NREN, which complement its research and education mission.

The recommendations are organized here according to the main needs which they will serve: first, ensuring that the design and use of the network remains open to diversity; second, safeguarding the freedom of users. The ultimate goal is to develop a habitable, usable and sustainable system — a nation of electronic neighborhoods that people will feel comfortable living within.

I. ENCOURAGE COMPETITION AMONG CARRIERS

In the context of the NREN, act now to create a level, competitive playing field on which private network carriers, whether for-profit or not-for-profit, can compete. Do not give a monopoly to any carrier. The growing network must be a site where competitive energy produces innovation for the public benefit, not the refuge of monopolists. The greatest danger is “balkanization,” in which the net is broken up into islands, each developing separately, without enough interconnecting bridges to satisfy users’ desires for universal connectivity.

Strong interoperability requirements and adherence to standards must be built into the design of the NREN from the outset. For example, the National Science Foundation could make funding to NREN backbone carriers contingent on participation in an internetwork exchange agreement that would serve as a framework for a standards-based environment. As the NREN is implemented, some formal affirmation of fair access is needed — ideally by an “Internet Exchange Association” formed to settle common rules and standards.

II. CREATE AN OPEN PLATFORM FOR INNOVATION

Encourage information entrepreneurship through an open architecture (non-proprietary) platform, with low barriers to entry for information providers.

In the design of the NREN, information entrepreneurship can best be promoted by building with open standards, and by making the network attractive to as many service providers and developers as possible. The standards adopted must meet the needs of a broad range of users, not just narrow needs of the mission agencies that are responsible for overseeing the early stages of the NREN.

Policies for the NPN should therefore not only accommodate existing information industry interests, but anticipate and promote the next generation of entrepreneurs. It should be as easy to provide an information service as to order a business telephone.

No discrimination. Large and small information providers will probably coexist as they do in book publishing, where the players range from multi-billion-dollar international conglomerates to firms whose head office is a kitchen table. They can coexist because everyone has access to production and distribution facilities — printing presses, typography, and the U.S. mails and delivery services — on a nondiscriminatory basis. No one can guarantee when an application as useful as the spreadsheet will emerge for the NPN (as it did for personal computers), but an open architecture is the best way to let it emerge, and to let it spread when it does.

Just before the NREN bill was passed by Congress, under pressure from the D.C. Court of Appeals, Judge Harold Greene lifted the information services restrictions on the Regional Bell Operating Companies (RBOCs) imposed during the divestiture of AT&T in 1982 — despite the competitive tension among the telephone companies, cable TV carriers, and newspapers.

With all of the uncertainty that surrounds the RBOCs’ entry into the information services market, we should use the NREN to learn how to develop a network environment where competitive entry is easy enough that the RBOCs’ opportunity to engage in anti-competitive behavior would be minimized. Since the NREN standards and procedures can be designed away from the dominance of the RBOCs, a fully open network design is within reach. In this sense, the NREN can be a test-bed for “safeguards” against market abuse just as it is a test ground for new technical standards and innovative network applications.

III. ENCOURAGE PRICING FOR UNIVERSAL ACCESS

Congressman Edward Markey, Chairman of the Subcommittee on Telecommunications and Finance of the House Energy and Commerce Committee, warns that as information services proliferate, “the challenge before us is how to make them available swiftly to the largest number of Americans at costs which don’t divide the society into information haves and have-nots and in a manner which does not compromise our adherence to the long-cherished principles of diversity, competition and common carriage.”

To address this problem in the long term, Sen. Conrad Burns has proposed that the universal service guarantee statement in the Communications Act of 1934 be amended to include access to “a nation-wide, advanced, interactive, interoperable, broadband communications system available to all people, businesses, services, organizations, and households … .”

In the near term, the NREN can serve as a laboratory for testing a variety of pricing and access schemes in order to determine how best to bring basic network services to large numbers of users. The NREN platform should facilitate the offering of fee-based services for individuals.

Cable TV is one good model: joining a service requires an investment of $100 for a TV set, which 99% of households already own, about $50 for a cable hookup, and perhaps $15 per month in basic service. Similarly, a carrier providing connection to the mature National Public Network might charge a one-time startup fee and then a low fixed monthly rate for access to basic services, which would include a voice telephone capability.
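The cable-model arithmetic above can be checked in a few lines. The dollar figures come straight from the paragraph; the yearly totals are simply our own sums, offered as a rough illustration rather than anything from the article itself:

```python
# Back-of-the-envelope cost of joining, using the article's cable figures.
tv_set = 100      # one-time; 99% of households already own one
hookup = 50       # one-time cable connection fee
monthly = 15      # approximate basic monthly service

first_year = hookup + 12 * monthly          # for a household with a TV
first_year_with_tv = tv_set + first_year    # starting from scratch
print(first_year, first_year_with_tv)       # 230 330
```

On these assumptions, a household that already owns a television is in for about $230 in the first year — the kind of entry cost the essay argues a mature National Public Network carrier should aim to match.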

Open architecture could help phone companies offer lower rates for digital services. If opportunities and incentives exist for information entrepreneurs, they will create the services which will stimulate demand, increase volume, and create more revenue-generating traffic for the carriers. In a competitive market, with higher volumes, lower prices follow.

IV. MAKE THE NETWORK SIMPLE TO USE

The ideal means of accessing the NPN will not be a personal computer as we know it today, but a much simpler, streamlined information appliance — a hybrid of the telephone and the computer.

“Transparency” is the Holy Grail of software designers. When a program is perfectly transparent, people forget about the fact that they are using a computer. The mechanics of the program no longer intrude on their thoughts. The most successful computer programs are nearly always transparent: a spreadsheet, for instance, is as self-evident as a ledger page.

Personal computer communications, by contrast, are practically opaque. Users must be aware of baud rates, parity, duplex, and file transfer protocols — all of which a reasonably well-designed network could handle for them. It’s as if, every time you wanted to drive to the store, you had to open up the hood and adjust the sparkplugs. On a National Public Network, this invites failure. People without the time to invest in learning arcane commands would simply not participate. The network would become needlessly exclusionary.

Part of the NREN goal of “expand[ing] the number of researchers, educators, and students with … access to high performance computing resources” is to make all network applications easy to use. As the experience of the personal computer industry has shown, the only way to bring information resources to large numbers of people is with simple, easy-to-learn tools. The NREN can be a place where various approaches to user-friendly networks are tested and evaluated.

V. DEVELOP STANDARDS OF INFORMATION PRESENTATION

The National Public Network will need an integrated suite of high-level standards for the exchange of richly formatted and structured information, whether as text, graphics, sound, or moving images. Use the NREN as a test-bed for a variety of information presentation and exchange standards on the road towards an internationally accepted set of standards for the National Public Network.

Congress has provided that the National Institute of Standards and Technology “shall adopt standards and guidelines … for the interoperability of high-performance computers in networks and for common user interfaces to systems.” As the implementation of the NREN moves forward, we must ensure that standards development remains both a public and private priority. Failure to make a commitment to an environment with robust standards, said D. Allan Bromley, director of the Office of Science and Technology Policy, would be “the beginning of a Tower of Babel that we can ill afford.”

VI. PROMOTE FIRST AMENDMENT FREE EXPRESSION BY AFFIRMING THE PRINCIPLES OF COMMON CARRIAGE

In a society which relies more and more on electronic communications media as its primary conduit for expression, full support for First Amendment values requires extension of the common carrier principle to all of these new media.

Common carriers are companies which provide conduit services for the general public. They include railroads, trucking companies, and airlines as well as telecommunications firms. A communications common carrier, such as a telephone company, is required to provide its services on a non-discriminatory basis. A telephone company has no liability for the content of a phone call. Neither can it arbitrarily deny service to anyone.

The common carrier’s duties have evolved over hundreds of years in the common law and later statutory provisions. The carriers of the NREN and the National Public Network, whether telephone companies, cable television companies, or other firms, should be treated in a similar fashion.

Given Congress’s plan to build the NREN with services from privately owned carriers, a legislatively imposed duty of common carriage for NREN carriers is necessary to protect free expression effectively. As Professor Eli Noam, a former New York State Public Utility Commissioner, explains, “Common carriage is the practical analog to [the] First Amendment for electronic speech over privately owned networks, where the First Amendment does not necessarily govern directly.”

We should take advantage of the NREN’s controlled environment to experiment with various open access and common carriage rules and enforcement mechanisms, seeking regulatory alternatives to what has evolved in the public telephone system.

New publishing opportunities. Along with promoting free expression, common carriage rules are important for ensuring a competitive market in information services on the National Public Network. The same advances in computing which created desktop publishing are delivering “desktop video” which will make it affordable for the smallest business, agency, or group to create video consumables. The NPN must incorporate a distribution system of individual choice for the video explosion.

If the cable company wants to offer a package of program channels, it should be free to do so. But so should anyone else. There will continue to be major demand for mass-market video entertainment, but the vision of the NPN should not be limited to this form of content. Anyone who wishes to offer services to the public should be guaranteed access over the same fiber-optic cable under the principle of common carriage. From this access will come the entrepreneurial innovation, and this innovation will create the new forms of media that exploit the interactive, multimedia capabilities of the NPN.

VII. PROTECT PERSONAL PRIVACY

The infrastructure of the NPN should include mechanisms that support the privacy of information and communication. Building the NREN is an opportunity to test various data encryption schemes and study their effectiveness for a variety of communications needs.

Technologies have been developed over the past 20 years which allow people to safeguard their own privacy. One tool is public-key encryption, in which an “encoding” key is published freely, while the “decoder” is kept secret. People who wish to receive encrypted information give out their public key, which senders use to encrypt messages. Only the possessor of the private key has the ability to decipher the meaning.
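The public/private split described above can be made concrete with a toy RSA-style example. This is purely illustrative — the primes are tiny and the scheme insecure at this scale, and none of the specific numbers come from the essay — but it shows exactly the property Kapor describes: the “encoding” key can be handed out freely, while only the holder of the secret “decoder” can read the message.

```python
# Toy public-key (RSA-style) encryption with tiny primes, for illustration
# only. Real keys are hundreds of digits long; these numbers are ours,
# not the article's.
p, q = 61, 53                 # two secret primes
n = p * q                     # modulus, published as part of the public key
e = 17                        # public exponent: (e, n) is the "encoding" key
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # private exponent: the secret "decoder"

message = 42                  # a message, encoded as a number smaller than n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)  # only the private-key holder can decrypt

assert recovered == message
```

The sender never needs the secret `d`; eavesdroppers who see `ciphertext`, `e`, and `n` would have to factor `n` to recover it — easy here, infeasible at real key sizes.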

The privacy of telephone conversations and electronic mail is already protected by the Electronic Communications Privacy Act. Legal guarantees are not enough, however. Although it is technically illegal to listen in on cellular telephone conversations, as a practical matter the law is unenforceable. Imported scanners capable of receiving all 850 cellular channels are widely available through the gray market.

Cellular telephone transmissions are carried on radio waves which travel through the open air. The ECPA provision which makes it illegal to eavesdrop on a cellular call is the wrong means to the right end. It sets a dangerous precedent in which, for the first time, citizens are denied the right to listen to open air transmissions. In this case, technology provides a better solution. Privacy protection would be greatly enhanced if public-key encryption technology were built into the entire range of digital devices, from telephones to computers. The best way to secure the privacy and confidentiality Americans say they want is through a combination of legal and technical methods.

As a system over which not only information but also money will be transferred, the National Public Network will have enormous potential for privacy abuse. Some of the dangers could be forestalled now by building in provisions for security from the beginning.

CONCLUSION

The chance to influence the shape of a new medium usually arrives when it is too late: when the medium is frozen in place. Today, because of the gradual evolution of the National Public Network, and the unusual awareness people have of its possibilities, there is a rare opportunity to shape this new medium in the public interest, without sacrificing diversity or financial return. As with personal computers, the public interest is also the route to maximum profitability for nearly all participants in the long run.

The major obstacle is obscurity: technical telecommunications issues are so complex that people don’t realize their importance to human and political relationships. But be this as it may, these issues are of paramount importance to the future of this society. Decisions and plans for the NPN are too crucial to be left to special interests. If we act now to be inclusive rather than exclusive in the design of the NPN, we can create an open and free electronic community in America. To fail to do so, and to lose this opportunity, would be tragic.

Mitchell D. Kapor

APPLE AND IBM INK MULTIMEDIA AGREEMENT
New company wants to set standards for industry use

One might be tempted to think, now that the ink is dry on Apple and IBM’s wide-ranging technology alliance, that the days of speculation are over. But think again.

Despite the media circus around the October 2nd announcement — live from San Francisco, and beamed by satellite around the globe — we don’t know a whole lot more than we did when the news first broke in July.

So, what’s the score? As one industry pundit quipped, “It could be good. It could be bad. It could be nothing.” That’s as good a way as any to organize an analysis of the two firms’ new technology alliance for multimedia. First, the facts as we know them.

THEY CALL IT KALEIDA

Kaleida (as in kaleidoscope) is the name of the newly formed joint venture between IBM and Apple for multimedia technologies and products.

At this point, it’s expected that Kaleida will eventually employ between 200 and 300 people in offices located somewhere in Silicon Valley. Michael Braun, vice president of multimedia for IBM, says the company will start off with 20 or so “hand-picked individuals,” as well as a board of directors that’s split 50-50 between IBM and Apple folks.

David Nagel, president of Apple’s Advanced Technology Group and acting general manager of its consumer products division, says that Kaleida will also select two board members from outside the company. Names of members of the board and employees have yet to be made public.

The stated intention behind Kaleida (it’s tempting to write it “Collida” — as in, “They’re gonna collida with Microsoft …”) is to set and promote common data formats, scripting languages and system extensions that support rich media types, such as video, sound, graphics and rich text. In other words, Kaleida will develop a platform-independent multimedia software architecture and license it to Apple, IBM and the industry at large.

IT COULD BE GOOD …

In theory, Kaleida could prove to be the shot in the arm that media-based technology needs to gain widespread acceptance in a developer community beleaguered by incompatibility between computers and multimedia devices.

The challenge and the opportunity. Nowhere is that incompatibility more evident than in comparing IBM’s and Apple’s multimedia offerings today. Apple’s multimedia focus, which has wavered at best over the past couple of years, has been mostly on creating the underlying tools to allow “roll your own” multimedia productions. IBM, on the other hand, has for years been selling specialized industrial applications for multimedia in training and point-of-sale markets.

The two companies have not a stick of technology in common — no hardware, no data formats, no device drivers, no compression algorithms, nothing. This presents a formidable challenge to Kaleida.

Including the present. Apple chairman John Sculley says that Kaleida’s charter is to be platform-independent and inclusive of operating systems already in use by both companies.

It’s still not clear exactly what will be delivered into the joint venture, but Sculley says that to “jump-start” Kaleida’s effort, Apple will license QuickTime and other yet unnamed technologies on a nonexclusive basis to the new venture. (Apple will continue to develop and market QuickTime as an Apple product.)

IBM’s Braun says there’s “no reason” not to bring RIFF (Resource Interchange File Format) and MCI (Media Control Interface), public domain standards codeveloped with Microsoft for data formats and device control, into Kaleida as well.

“We want to be sure that titles already created will still play,” says Sculley. “In the context of architectures, this would mean run-time environments for all existing platforms.” Although there is no direct connection between Kaleida and the other IBM-Apple joint venture for object-based operating systems, called Taligent, Sculley says the intention is to make sure that Kaleida’s charter will be as expansive as possible, and that it will be able to integrate its products easily with Taligent’s OBS (object-based systems) world. “We want to make sure that the bar for multimedia is not set too low.”

Adds Braun: “Kaleida has to address the problems of multimedia developers. To energize applications development, we have to reduce the time, cost and risk. That’s a real problem. And what they’ll be getting is data specs from the two preeminent companies in the market. Whatever they need, that’s what we want to do.”

A single scripting language. IBM and Apple also believe that a single, platform-independent scripting language — a multimedia version of the industry-standard PostScript page description language — could spur title development. “Today’s scripting languages all create platform-specific programs,” says IBM’s Braun. “We think we know how to do this in a way that if the tools — HyperCard, MacroMind, AVC, Authorware, etc. — used Kaleida’s scripting language, the output could run on multiple platforms.”
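The idea Braun sketches — one script, many platforms — can be illustrated in miniature: treat the title’s script as data, and let each platform supply its own player that interprets it. Everything below is a hypothetical sketch of the concept, not Kaleida’s actual (and at press time unnamed and undelivered) language; the command names and file names are invented:

```python
# Conceptual sketch of platform-independent scripting: the title is pure
# data, and each platform supplies its own interpreter. Hypothetical
# commands and names; not Kaleida's real design.
script = [
    ("show_image", "title.pct"),
    ("play_sound", "theme.aiff"),
    ("wait", 2),
]

def run(script, player):
    """Interpret the same script against any platform's backend."""
    for command, arg in script:
        player[command](arg)

# One possible backend; a PC or set-top player would define the same
# commands in terms of its own graphics and sound services.
mac_player = {
    "show_image": lambda f: print(f"[Mac] drawing {f}"),
    "play_sound": lambda f: print(f"[Mac] playing {f}"),
    "wait":       lambda s: print(f"[Mac] waiting {s}s"),
}

run(script, mac_player)
```

The payoff Braun describes falls out of this structure: a title authored once in HyperCard, MacroMind, AVC, or Authorware would only need a `run`-style interpreter on each platform to play everywhere.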

Braun doesn’t know if today’s already-scripted applications would have to be rewritten to take advantage of the new Kaleida language when it appears — and he doesn’t seem unduly concerned about it either. “Most of the business is in front of us, not behind us,” he says.

Hardware, too? But Kaleida’s charter doesn’t end with software architectures. Despite the promise of combining Kaleida’s multimedia architecture with Taligent’s scalable, object-based operating system and the Apple-IBM-Motorola RISC processor — a perfect setup for a powerful multimedia product line, from consumer player to high-end multimedia workstation — Apple’s Nagel says “You don’t have to wait for RISC technology to have a player.” And in any case, he adds, “Kaleida will not be married to a processor or architecture. It’s primarily a software company — it won’t develop hardware products per se — but it will be involved in developing hardware specifications.”

In other words, it’s quite likely that Kaleida will develop a specification for hardware alternatives to both the Multimedia PC (MPC) and Philips’s Compact Disc-Interactive (CD-I), due to ship this month.

“CD-I was maybe a necessary learning step for the industry, but it would have been a lot more successful in the 1980s than in the 1990s,” says Sculley. “A hardware-defined system is obsolete before it ships. The ’90s are software-defined. Kaleida is looking at an entire architecture to take us out at least 10 years, though its initial products will be more focused to near-term.”

No altruism here. Each in its own way, both Apple and IBM have been trying to make multimedia cheap and widely available on a much broader range of devices, and to remove the confusion about how it’s represented to customers. And both companies believe the multimedia devices of the future will be cheaper, as well as more “sophisticated and interesting,” than MPCs — including desktop machines, portables and consumer devices.

But despite these concentricities, says Apple’s Nagel, Kaleida isn’t about altruism, or about losing competitive edge. “We’re not a service organization,” he says of Apple. “We want to enable new products and to be profitable. As a product guy, [Kaleida] forces the pace of technology and innovation.”

IT COULD BE BAD …

It’s been said that the only thing worse than hate is indifference, and the greatest danger that the Apple-IBM alliance faces is that no one will care about it by the time products are ready to ship.

As we said in an earlier piece on the Apple-IBM multimedia agreement (see Vol. 1, No. 3), even if Kaleida comes off according to plan, almost every title developer in the business is either considering or already developing products to run under the Multimedia PC specification set forth by Microsoft and the MPC Marketing Council. Sun Microsystems is also working on promoting multimedia computing, as is Silicon Graphics with its new Indigo multimedia workstation.

Taking on the MPC. But Sculley is unfazed by the potential hazard. He says multimedia specifications based on today’s standard architectures fall short.

“It’s a lot more complicated than taking DOS and adding technology on top of it and calling it multimedia,” Sculley says, making an oblique reference to Microsoft’s MPC specification. “We have more multimedia technology than anyone and we know that’s not the way to do it. You need run-time versions of an operating system for consumer electronics devices.”

Speaking softly of the future. In addition to pointing out the shortcomings of present offerings from Microsoft and consumer companies such as Philips, IBM and Apple are quick to emphasize that whatever products either Kaleida or Taligent delivers will be compatible with both companies’ present offerings.

There’s at least one good reason for such emphasis. At this critical time in both companies’ hardware businesses — IBM losing market share, Apple gaining market share but losing profit by selling at lower cost — neither company wants to axe itself out of the next three years of hardware and software sales and development by stressing the future too heavily.

Hence, it is vital that Kaleida deliver some standards earlier than two years from now to be able to capture any appreciable share of mind from developers. Considering the disparity between the companies’ product offerings today, this will be a thorny challenge.

“Pre-competitive cooperation”? Also, there are still some very real questions about whether it’s possible for two competitors to be linked so closely and still maintain their competitive edge — against either each other or the rest of the industry. At the October 2 announcement, every other phrase out of each speaker’s mouth addressed the competition question, which indicates that at least the perception of the problem exists.

“We intend to remain fiercely competitive,” went one version of the stage growl. “Don’t even think for one nanosecond that we’ll back off from competition,” was another presenter’s version. “We will continue to aggressively evolve our own products,” said yet another. Later, Sculley said privately, “There’s one thing I want to impress upon you. We intend to be everywhere.”

But let’s just say that Apple, for example, finds itself riding a wave of popularity at IBM’s expense. Then what happens in those Kaleida and Taligent board meetings, populated even-Steven by IBM and Apple executives? Is it still possible to “cooperate on a pre-competitive level,” as Sculley says? Where do good intentions for the future of the industry go when one company gains a clear lead over the other?

IT COULD BE NOTHING …

The benefits of cooperating at the “pre-competitive level” are clear, both for customers who are tired of trying to make square pegs fit into round holes and for vendors and developers trying to sell products into a world that increasingly requires coexistence.

Of course, if the Federal Trade Commission decides that these joint ventures between Apple and IBM constitute a restriction of free trade, we can forget any benefits coming from Kaleida or Taligent. Even with the FTC’s blessing, history does not make many good cases for the success of such agreements. (Recall Sun’s failed joint venture with AT&T to merge two versions of the Unix standard, a relative no-brainer with clear benefits to both parties, as one example.)

No matter how carefully a venture is crafted, something always can — and usually does — go wrong. How IBM and Apple handle those problems as they arise will decide the ultimate success of the venture.

A good case could be made that, considering the state of the two companies’ businesses today and Microsoft’s growing stranglehold on the industry, neither Apple nor IBM had any choice but to make a move so bold. As the comic philosopher Ashleigh Brilliant says, “I don’t have any solution, but I certainly admire the problem.”

Denise Caruso

IBM LAUNCHES NEW MULTI-MEDIA LINE
Many other platforms and titles debut in October, too

This is a big month for multimedia computing to strut its stuff. It seems that every week in October, and at every conference (and this month is full of them), one player or another has announced, or will announce, the product or products that will turn the world of multimedia on its ear.

Significant announcements are coming from every corner. Apple and IBM announced their joint venture for multimedia technologies (see story on page 14); Tandy Corp. has shipped its first Multimedia PCs; the MPC Marketing Council showcased the first crop of multimedia titles for MPC machines at a rollout in New York on October 8th (see story on page 22); and Philips is finally releasing Compact Disc-Interactive (CD-I) on October 18 at a new price point, lower than previously announced — just less than $800 retail.

No grass growing under IBM. Not to be outdone, IBM just launched six videodisc-based titles at a splashy party at the CD-ROM Expo in Washington, D.C. (the long-awaited Columbus: Discovery, Encounter and Beyond from Robert Abel and Synapse Technology, and the Illuminated Books series of five titles, including the oft-showcased Ulysses, from AND Communications).

And at long last, after about a year of very broad hints in many public forums by Bob Carberry, assistant general manager of systems technology for IBM’s Personal Systems Group, IBM announced the first of its upcoming line of “Ultimedia” computers, starting with a media-capable PS/2 Model 57 that has all of the functionality of the MPC, without the logo.

Not an MPC. The new Ultimedia computer is a cached-memory version of IBM’s PS/2 Model 57 that allows a standard Intel 386SX processor to perform as though running at almost twice its 20-MHz clock speed. A memory cache buffers data and instructions in fast, on-chip memory so that the main processor doesn’t have to use slower random access memory as often.
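For readers who want the mechanics, the speedup logic can be sketched in a few lines of Python. This is an illustrative model only; the cache geometry and cycle costs below are invented round numbers, not the M57SLC’s actual figures.

```python
# Illustrative sketch, not IBM's design: how a small direct-mapped
# cache cuts average memory latency. Geometry and cycle costs are
# invented round numbers.

CACHE_LINES = 64
WORDS_PER_LINE = 4
HIT_CYCLES = 1        # cost of finding data in the cache
MISS_CYCLES = 5       # cost of going out to main memory

def run(addresses):
    """Return (hits, misses, total_cycles) for a direct-mapped cache."""
    tags = [None] * CACHE_LINES
    hits = misses = cycles = 0
    for addr in addresses:
        line = addr // WORDS_PER_LINE
        index = line % CACHE_LINES
        tag = line // CACHE_LINES
        if tags[index] == tag:
            hits, cycles = hits + 1, cycles + HIT_CYCLES
        else:
            misses, cycles = misses + 1, cycles + MISS_CYCLES
            tags[index] = tag     # fill the line from main memory
    return hits, misses, cycles

# A loop re-reading a small working set hits the cache almost always...
looped = [a for _ in range(100) for a in range(16)]
h, m, c = run(looped)
print(c / len(looped))            # average cost per access: near 1

# ...while widely scattered accesses pay the main-memory penalty every time.
scattered = [i * WORDS_PER_LINE * CACHE_LINES for i in range(1600)]
h2, m2, c2 = run(scattered)
print(c2 / len(scattered))        # average cost per access: 5.0
```

That gap, not the clock crystal, is where the near-doubled performance comes from: the processor spends far fewer cycles waiting on memory.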

The new computer, called the M57SLC, will cost less than $6,000 and has a built-in CD-ROM XA drive (which means it interleaves audio, text and graphics, and can read standard CD-ROM and Photo CD discs; discs designed specifically for XA will also play in a CD-I unit), an 80-MB hard drive, a 16-bit audio card, XGA graphics and a “media control panel” on the front of the machine that includes a volume control and stereo output jacks.

Benign neglect. IBM has always hedged its bets when it came to the MPC. Even at the Microsoft Multimedia Developers Conference last November, when Microsoft announced a raft of manufacturers who’d agreed to produce computers that followed the basic specifications set down for the MPC, IBM vice president Mike Braun sat on stage with other hardware vendors building MPC computers and studiously avoided committing to manufacturing an MPC device.

IBM is continuing to exhibit what Peter Blakeney, manager of market programs for IBM’s Multimedia and Education division, calls “benign neglect” toward the MPC and the MPC Marketing Council. Blakeney agrees the MPC is a worthwhile concept for developers, forcing them to consider all of the media and marketing considerations before developing titles.

But IBM, he says, has committed to multimedia technologies on a much broader scale than the MPC delivers. Multimedia is one of IBM’s top six strategic initiatives for the 1990s, and all of IBM’s hardware products, from personal computers to RISC machines to AS/400 and Enterprise computers, will be capable of handling high-quality still images, sound and video.

Meeting the basic requirements. The MPC, says Blakeney, assures developers of a base-level system, but is inadequate for 90 percent of IBM’s customers. “The market needs different levels of capabilities [for different applications]. Four or five other ‘basic requirement levels’ need to be filled.”

IBM, therefore, has created its own brand name, called Ultimedia. The products marketed under this name may be PCs, RISC machines, AS/400 or Enterprise products, providing multimedia “solutions” across the entire IBM product line. The company has also announced specialized customer services, such as Kiosk Solutions, to assist customers who are interested in implementing public access or merchandising applications.

It’s not yet clear which multimedia titles developed using the MPC specification will work with Ultimedia. According to Braun, the main differences between the MPC and the M57SLC are the audio specifications — MPC uses 8-bit audio, the M57SLC 16-bit — and the fact that the new computer does not include a joystick port. “For the business market, we don’t think a joystick is a heavy requirement,” he says.

DVI, a touch-screen and VGA. In addition to the new computer, IBM is also unveiling, with DVI development partner Intel Corp., a rejiggered DVI chip set and board called ActionMedia II that incorporates variable compression rates. IBM is also announcing new DVI digital video applications, including Person to Person, a desktop video conferencing package. (Intel, which manufactures the DVI chip set, recently ceded all application marketing for the technology to IBM.)

Another new product announcement connected to IBM’s multimedia strategy is the TouchSelect snap-on touch screen, which attaches to a standard monitor and delivers all of the functionality of a mouse by simply moving an object — a finger or a stylus — across the screen. Demand for the earlier version of IBM’s complete touch screen display system has outstripped IBM’s expectations, and Blakeney believes the new product will be similarly well accepted in the market.

Networked video, too. The final product announcement is PS/2 TV, a $500 hardware add-on “pancake” that sits atop the CPU and allows the display of video, either in a window or full-screen on a VGA monitor, while the computer is engaged in other applications. (Audio is supported as well.) This window can be accessed from any of the PC operating systems in use today, including DOS, OS/2 or Windows. This allows video from cable, satellite or closed-circuit television to be broadcast on any individual’s computer.

One PS/2 TV option is a $99 device called the F-Coupler, which allows an analog video signal to be distributed via a local area network. The F-Coupler lets the analog signal ride atop a different part of the spectrum than the digital data without degrading the performance of the network.

David Baron

PHOTO CD SPEC TAKES SHAPE
XA disc a promising new medium for authoring and distribution

This month, Kodak Corp. and Philips will release the data structures and the file format specifications for Kodak’s Photo CD product line, which is expected to hit the market in June 1992.

Photo CD, you will recall, is the new film scanning and recording system developed by Kodak that allows some 100 35mm photos to be “printed” and distributed on a writable compact disc and displayed on a TV screen.
The fact that Kodak has worked closely with Philips to make Photo CD compatible with Philips’s Compact Disc-Interactive (CD-I) specification has been well documented. But what’s less well understood is that Photo CD is actually a CD-ROM XA disc.

This means a Photo CD disc not only plays in a CD-I player, but is capable of carrying interleaved audio, text and image data, as well as Photo CD picture data, to be accessed by a personal computer.

SOUNDS LIKE A NEW TITLES MEDIUM

Thus, what started out as a new and interesting consumer technology not only may become a powerful means of publishing photographic images for use in computer-based applications, but is very likely to become a new medium for creating and distributing multimedia presentations. In fact, it may even prove to be a powerful incentive for people to buy CD-ROM XA drives or upgrade their old CD-ROM drives.

A bridge format. CD-ROM XA is an addendum to the Yellow Book, which details the physical formatting of CD-ROM discs. When the XA specification is used to lay data onto an optical disc, it allows audio, text and image data to be interleaved within tracks or sectors on a compact disc. Photo CD uses a bridge format that allows CD-ROM XA drives and CD-I players to read data from the same disc.

(A standard Yellow Book CD-ROM does not allow for interleaved data, so today’s CD-ROM drives cannot take advantage of the enhanced capabilities of CD-ROM XA. To do so, the user needs to have an XA-compatible drive.)
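The difference is easy to picture in code. The sketch below (in Python, with invented stream names and an arbitrary 4:1 interleave ratio; the real XA spec defines its own sector types and channel layout) shows why one linear read of an interleaved disc can feed audio and images at once:

```python
# Conceptual sketch of XA-style interleaving. Stream names and the
# 4:1 ratio are illustrative, not the actual XA sector layout.

def interleave(audio, images, ratio=4):
    """Lay down `ratio` audio sectors before each image sector."""
    disc = []
    for n, img in enumerate(images):
        disc.extend(('audio', s) for s in audio[n * ratio:(n + 1) * ratio])
        disc.append(('image', img))
    disc.extend(('audio', s) for s in audio[len(images) * ratio:])
    return disc

def demux(disc, kind):
    """One sequential pass recovers a whole stream without seeking."""
    return [payload for k, payload in disc if k == kind]

disc = interleave(list(range(8)), ['photo0', 'photo1'])
# Sector order on disc: 4 audio, 1 image, 4 audio, 1 image.
assert demux(disc, 'audio') == list(range(8))
assert demux(disc, 'image') == ['photo0', 'photo1']
```

A plain Yellow Book disc, by contrast, stores each stream in its own contiguous extent, forcing the drive to seek back and forth to keep audio playing while it loads images.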

Writable CD. The Orange Book specification, developed and licensed by Sony and Philips, is what’s used to create these XA- and CD-I-compatible discs. Using some formidable technological tricks, it allows developers to lay down digital data streams on a writable optical disc (sometimes known as “write-once”) — not a standard, prestamped compact disc — in such a way that the disc acts just like a standard XA or CD-I disc.

This is no mean feat. Standard optical discs can only be manufactured by a special mastering process that permanently encodes the data by stamping pits into the surface of the disc. Data is read by a laser that notes the transitions from a non-pitted area to a pitted one.

Writable discs are fundamentally different. They contain what’s called an “active layer” of dye, which absorbs laser light to change the reflectivity of the disc’s surface instead of changing the physical surface of the disc itself.

The technical win. Write-once technology makes it possible to write discs on-site, one disc at a time, instead of sending them off to be mastered and mass-produced. Until the Orange Book was written, the only way to read a writable disc was to both create it and play it on an expensive system designed specifically for that purpose.

The technical win with Orange Book was to make the writable disc’s reflections appear to the reading laser just like those of a stamped disc. In other words, the drive sees the changes in reflectivity as if they were actual physical surface variations, so writable media can be read by equipment that was not designed to do so.

NOT JUST FOR CONSUMERS

Kodak is banking that the simple ability for consumers to “play” their 35mm photographs on their televisions, via its own Photo CD player, will be a mass consumer product in its own right. But as a result of Photo CD’s compatibility with CD-ROM XA and its compliance with the Orange Book standards, the company is positioned to make a significant impact in the digital media marketplace as well.

Conversion for CD-ROM XA users. Scott Brownstein, advanced development manager of Kodak’s CD Imaging Division, says all that’s necessary to make Photo CD operational in CD-ROM XA drives is some additional software, which Kodak is already developing. Although the software will certainly be available through retail channels, says Brownstein, the company is also considering bundling it with applications and/or CD-ROM XA hardware.

Kodak’s XA software development is following three tracks. One is an accessory that allows users to pull an image off a Photo CD and paste it into their existing applications (this was demonstrated on a Macintosh at the Seybold Computer Publishing conference earlier this month). Another is the creation of plug-in modules that allow Photo CD images to be imported into existing applications, such as Adobe’s Photoshop. The third, for independent software developers, is a toolkit that will enable vendors to include Photo CD as a new data type in upgraded versions of their applications.

Kodak is also experimenting with software that converts Photo CD images to DVI, the digital video format codeveloped by IBM and Intel. And since Kodak’s Photo CD photofinishing equipment is based on Sun Microsystems’ SPARCstation platform, a Unix accessory, module and toolkit are well under way.

CD-based presentations. Since Photo CD is XA-compatible, thus supporting interleaved audio and text, consumers will be able to add both titles and narration to the photos on their Photo CD discs. But Brownstein says he envisions a new class of applications in which Photo CD is used to author real-time presentations delivered directly from compact disc.

There are two methods for putting various media types onto an XA disc such as Photo CD. One is to interleave them with the image data. This, Brownstein says, will likely be the method of choice for people working to create multimedia presentations.

The other method is to append them at the end of the disc, using a “pointer” system in Photo CD that tells the system to play a certain audio track, for example, with its related picture — the most likely method for consumers without XA drives, since interleaved audio isn’t supported by Kodak’s stand-alone Photo CD player.
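A rough sketch of that append-and-point method, with invented record names (the actual Photo CD pointer structures are not spelled out here): each picture entry simply refers to an audio track appended later on the disc.

```python
# Hedged sketch of the append-and-point method. The table format is
# invented for illustration; the real Photo CD structures differ.

pictures = ['beach', 'dog', 'party']        # picture data on the disc
audio_tracks = ['waves.pcm', 'bark.pcm']    # tracks appended at the end

# Pointer table: picture index -> audio track index (None = silent)
pointers = {0: 0, 1: 1, 2: None}

def play(picture_index):
    """What a stand-alone player might do for one 'slide'."""
    track = pointers.get(picture_index)
    sound = audio_tracks[track] if track is not None else None
    return pictures[picture_index], sound

assert play(0) == ('beach', 'waves.pcm')
assert play(2) == ('party', None)
```

Because the audio lives in its own appended region rather than in interleaved sectors, a player without XA support can still locate and play it.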

The nitty-gritty details of how people will actually accomplish this are not yet clear, but Brownstein says a customer will be able to take analog cassettes, as well as digital images, text and audio tracks stored on diskettes, to a Photo CD photofinisher. Tracks from audio CDs can be used as well (with the proper permission, of course).

WHAT ABOUT COPYRIGHT?

The potential success of Photo CD also raises the specter of millions of digital images being set loose on a public that has little respect for, or understanding of, copyright law. Though anyone who snaps a shutter owns the image he or she creates, the ability to easily copy these high-quality “digital negatives” onto hard disks, combined with the growing hunger for high-quality images in the multimedia world, begs for some kind of encoding scheme on a Photo CD disc that cites image ownership.

They already thought of it. Brownstein says the Photo CD specification includes the ability to record ASCII data about individual images — such as the source or author of the image — onto the disc. “We also have the ability to encrypt the print-resolution data so you can’t print out a copy of the photograph without permission of the owner,” he says. “Content providers and the photofinisher can play games with how high-resolution info is stored on the disc, so you can’t print without knowing what game they used.”

But the spec doesn’t include the ability to encrypt the lower-resolution data that Photo CD uses to display images on a computer screen, another situation rife for illegal copying, especially in multimedia applications and consumer titles. (Photo CD stores separate, resolution-based information for printing to paper, computer displays and television standards, HDTV, and 1/4-screen and 1/16-screen thumbnails.)

Copyright information can still be included and displayed with each image, regardless of resolution, but there’s no way to prevent someone from using an illegally obtained image in a screen-based title.

“There will be intellectual property issues here,” says Brownstein. “You have to protect your own rights. But the other side is that if I can’t get my images to you in the first place, you won’t pay for them.”

The bottom line for business. Photo CD cannot really be considered expensive even for consumers. A Photo CD player that attaches to the TV is expected to cost less than $500, and the cost to “print” a roll of 35mm film onto a Photo CD disc will be around $16. However, anyone who really wants to exploit Photo CD’s capabilities for business presentations will have to buy a (comparatively) expensive CD-ROM XA drive.

But businesses have historically been willing to spend much more money to buy the tools they need, and given Photo CD’s potential, its success in the commercial world is likely to skyrocket.

Denise Caruso

RADIUS SNAGS TOUCHSTONE TECHNOLOGY
Apple grants rights to license and manufacture video products

Technology swapping in the world of desktop video continued this month (last month we reported SuperMac’s sale of ReelTime to Adobe) with the announcement that Apple Computer granted exclusive rights to its patented Touchstone video technologies to Radius, Inc. of San Jose, CA. Terms of the deal were not announced.

THE TOUCHSTONE TECHNOLOGY

Touchstone is a combination of hardware and software which, when combined with Apple’s QuickTime system extensions, makes digital video more versatile and less expensive to produce. It places significant emphasis on maintaining or increasing the quality of the video image as it is run through the image enhancement, compression, digitizing and resizing processes we take for granted with still graphics.

The eight Touchstone patents, which include some Apple-designed custom chips, cover three technologies: a new “HBus” architecture, scalable video windows and flicker-free 24-bit output to composite video. Touchstone technologies are not necessarily dedicated to one product, i.e., a single “super video” card, but will be used in a wide variety of products in different combinations over the coming months and years.

Today, users can pass video across NuBus, the Macintosh expansion bus, and display it on the monitor using a video card. A video window of 640×480 pixels at 30 frames per second is the limit of what NuBus can handle. Therefore, to manipulate the video in any way — such as compressing it and storing it on a hard drive in real time — a user would be required to reduce the resolution, make the window smaller or cut the number of frames being displayed.

A video bus. Apple devised a new architecture called HBus, which in essence moves video traffic off NuBus. Additional processors or dedicated daughterboards can be connected through an HBus slot that sits on NuBus cards. Thus, high-bandwidth video information can be processed much more quickly, without slowing concurrent operations of the computer.

According to Ben Jamison, product marketing manager for professional color systems at Radius, creating an open HBus slot instead of a dedicated, single application card will allow users to configure their machines for the particular needs of the application.

For example, one may want to add special effects, while another may need a video compression chip. In addition, different video applications may require different compression algorithms: a video teleconferencing application would require the Px64 algorithm (a telecommunications standard), while video postproduction applications may require MPEG compression. HBus’s functionality should allow vendors to address these needs separately and more efficiently.

Dynamic resizing without loss of quality. Touchstone’s second advancement is its display technology. With today’s displays, changing the size of a video window on a computer screen reduces image quality significantly because information is squeezed out, an effect called “decimation.” Thus, an image reduced to a thumbnail usually looks pretty bad. Touchstone uses a filtering process whereby the quality of the image is maintained, no matter how small the window (within reason, of course).
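The contrast between decimation and filtered resizing can be shown with a one-dimensional toy example. A simple box filter stands in here for Touchstone’s filtering, whose actual details Radius has not disclosed.

```python
# Naive decimation throws samples away; an averaging (box) filter
# folds every input sample into the smaller image. Real resamplers
# use fancier filters, but the principle is the same.

def decimate(row, factor):
    """Keep every `factor`-th sample; the rest are simply discarded."""
    return row[::factor]

def box_filter(row, factor):
    """Average each group of `factor` samples into one output sample."""
    return [sum(row[i:i + factor]) / factor
            for i in range(0, len(row), factor)]

# A one-pixel-wide bright detail (the 90) survives filtering as a
# softened bump, but decimation can drop it entirely.
row = [10, 10, 90, 10, 10, 10, 10, 10]
print(decimate(row, 4))     # [10, 10] -- the detail is gone
print(box_filter(row, 4))   # [30.0, 10.0] -- the detail still shows
```

Scale that up to two dimensions and full color, and it is the difference between a muddy thumbnail and one that still reads as a picture.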

This is largely a developer tool that, when built into applications, will enable producers to choose window sizes with impunity. In addition, picture icons or moving icons (picons and micons) can easily be created. Users will likely also appreciate the ability to incorporate a video signal into their applications as they see fit (no pun intended).

24-Bit convolution. The third technology is called “24-bit convolution.” Convolution is a filtering and interpolation process in which digital video information is converted back to an analog signal for use in conventional video appliances, such as TVs and VCRs. In order to do this, one has to convert the noninterlaced, digital RGB signal to the interlaced signal of analog television.

Television systems in the United States alternately display two fields of information every 1/60 of a second. Each field displays every other scan line of data, odd or even (hence the term “interlaced”). Thus two fields of video, displayed at the correct speed, produce the image of a full frame of video every 1/30 of a second, or 30 frames per second.

The problem with computer data, however, is that computers “paint” the image from top to bottom, without interlacing scan lines. So when a computer image is shown on an interlaced display, any detail that is only one scan line high will appear to “flicker” as the fields alternate.

The Touchstone process “interpolates” these interlaced lines, causing them to appear solid on the monitor. In addition, it allows 24-bit output, so photorealistic images and smooth color shading and blending can be captured on video or displayed. This technology would enable users to pull a composite video signal (like the one that goes in and out of a standard VCR) straight from the computer. Computer-generated presentations could then be easily displayed or captured on tape for distribution.
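Here is a toy model of the flicker problem and the interpolation cure. The 1-2-1 vertical filter below is a generic textbook choice, not necessarily what Touchstone uses.

```python
# A detail one scan line high lives entirely in one field, so it
# blinks as the fields alternate; a simple 1-2-1 vertical filter
# spreads it across both fields.

frame = [0, 0, 100, 0, 0, 0]    # brightness per scan line; line 2
                                # carries a one-line-high detail

odd_lines = frame[0::2]         # field 1: lines 0, 2, 4
even_lines = frame[1::2]        # field 2: lines 1, 3, 5
# The detail sits in one field and is absent from the other: flicker.
assert 100 in odd_lines and 100 not in even_lines

def interpolate(lines):
    """Blend each line with its neighbors (1-2-1 weights)."""
    out = []
    for i, v in enumerate(lines):
        above = lines[i - 1] if i > 0 else v
        below = lines[i + 1] if i < len(lines) - 1 else v
        out.append((above + 2 * v + below) / 4)
    return out

soft = interpolate(frame)
# Now both fields carry part of the detail, so neither field goes
# blank where the other is bright.
assert soft[1] > 0 and soft[3] > 0 and soft[2] < 100
```

The cost is a slight vertical softening, which is why this is done as filtering rather than simple line doubling.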

Touchstone’s capabilities include real-time international video standards conversion. The hardware allows input and output to and from the NTSC video format (the U.S. and Japan) and PAL (most of Europe), and input from SECAM (France and the U.S.S.R.).

A PRIMARY TOOL SET

Radius envisions Touchstone as the primary tool set for anyone using desktop video. And “anyone” includes other companies, which will be able to sell Radius products that incorporate Touchstone technologies under their own labels.

Barry James Folsom, Radius president and CEO, ultimately sees Touchstone as the “universal common denominator for multimedia developers” — the technology is being ported to other platforms, including Intel-based computers. This is a particular strength for Radius, which has taken its expertise and innovation in the Macintosh video monitor market and developed revolutionary products for the normally staid PC monitor market, including Pivot and 19-inch VGA monitors.

Radius expects the first Touchstone product to be announced in the first quarter of 1992. Incorporating many of the Touchstone technologies, it is likely to sell for under $2,000. The company is building an entire Touchstone product line, as well as upgrades to existing products.

Braving the video world. The technologies themselves are significant for users of both digital and analog video. According to product manager Jamison, one reason the multimedia market has not taken off as some predicted is that video has proven itself an extremely difficult and expensive data type to deal with.
Braving the video world requires buying expensive additional hardware and cabling, mastering difficult software, and navigating a whole new problem set of international display standards, compression techniques and a myriad of product offerings.

In addition, Jamison believes the multimedia market splits between analog and digital video users. People producing program-length material ultimately go “out to tape.” That is, they produce videotapes or laserdiscs (analog media) with computer tools. Digital video users generally include shorter snippets of video as a way of enhancing computer presentations, such as supplementing a PowerPoint slide show with video, or a MacroMind Director animation with a video window. Jamison’s goal is “a solution that is just as comfortable in both situations.”

Why did Apple let it go? Certainly on-board video is in future plans of all computer manufacturers. Why wouldn’t Apple want to hoard this technology for its own use?

Apple spokesperson Patty Tulloch says the reason is that Apple is concentrating on system software and platform development, and has chosen to offload resource-intensive NuBus development projects to those third parties that have more incentive to bring them to market.

David Baron

MPC MAKES BIG NEW YORK SPLASH

More than 60 Multimedia PC (MPC) titles and development tools made their collective “debut” at the MPC Event hosted by the Multimedia PC Marketing Council in New York on October 8. Microsoft, plus a supporting cast of MPC vendors and developers of MPC titles, staged the presentation at the American Museum of Natural History.

The Event was designed to demonstrate that, less than a year after the birth of the MPC standard, significant progress has been made. MPC titles are numerous. MPC systems and upgrade kits are beginning to be released. MPC authoring software is available and powerful. And distribution channels for MPC titles and hardware have been established.

Working with the limitations. The MPC platform combines a graphical user interface (Microsoft Windows with Multimedia Extensions), three kinds of audio capability, VGA graphics and the CD-ROM delivery medium. MPC is not yet digital television or movies on disc. It is, however, interactive animation, color photographs, voice, music, sound effects, drawings and nonlinear (“hyper”) structure, added to conventional computer applications and text.

The titles and authoring tools demonstrated in New York are not all available yet. But by and large, all seemed to combine the MPC elements in effective ways while finding a way to work within the limitations of the MPC standard.

Although MPC-labeled titles must be able to run on a 10-MHz ’286 PC and a VGA display monitor (usually with 16 shades of gray), most are clearly intended to be used in ’386- or ’486-based machines — often with 640×480 VGA graphics in 256 colors. The ’286 requirement is almost irrelevant, according to most MPC developers.

Titles old and new. Many of the MPC titles are new versions of old DOS CD-ROMs with new graphics and audio components; Macintosh CD-ROMs ported to the MPC; or floppy-based games or other applications, enhanced by multimedia elements stored on CD-ROM.

Among the most impressive titles are interactive storybooks and games for children who cannot yet read or are learning to read. Using animation, voice and other audio elements, these include entries from Broderbund, Sierra On-Line, Voyager, EBook and Context Systems.

Reference works and language learning have always demonstrated the advantages of a multimedia computer. Adding color photos, detailed diagrams and maps, timelines, speech, music, animation and hypertext search to encyclopedias, atlases, travel guides, art and history books, dictionaries, thesauruses and even the Guinness Book of World Records makes consulting reference works more informative and fun.

Publishers of such products include Syracuse Learning Systems, Britannica Software, InterOptica Publishing, Maxwell Electronic Publishing, Microsoft, Software Toolworks and Warner New Media.

Computer game and simulation developers appreciate the enormous data density of CD-ROMs for multimedia information. One example is the popular SimCity by Maxis. Tiger Media and Sierra On-Line showed adventure and “parlor” games.

Because of the audio capabilities of the MPC, many music-related titles are being developed. Microsoft is distributing an MPC version of Voyager’s Multimedia Beethoven: The Ninth Symphony. Passport Design, TRAX, Midisoft and Opcode also showed music editors for the MPC.

Periodicals, authoring tools, clip libraries. Three periodicals are slated to be published in the MPC format: Nautilus for the MPC, Verbum Interactive and Windows Information Manager MM.

More than a dozen authoring tools for the MPC were in evidence in New York, including products from AimTech, Autodesk, Authorware, MacroMind, Owl, Gold Disk and Knowledge Garden. Clip libraries of MPC art work, music and sound effects were shown by many, including Applied Optical Media, Corel, the Hyper Media Group, Prosonus and Killer Tracks.

Is MPC a winner? In addition to the variety and depth of titles and tools, distribution was also highlighted. Babbages, Ingram Micro, Merisel and other resellers announced that they will distribute MPC titles. Hardware manufacturers including Fujitsu, NCR, NEC, Olivetti, Philips, Tandy and Headland Technology will sell MPC systems or upgrade kits. And 85 companies are listed as committed to developing titles for MPC platforms.

Does all of the above make MPC a winner? For the customer, upgrading a PC or buying a ready-to-run MPC system, then paying hundreds of dollars for titles, still seems expensive in relation to the benefits. It is not obvious that one of the titles shown in New York will be the VisiCalc or Lotus 1-2-3 of the MPC world, or that in the absence of full-motion video or drastically lower prices, MPC will be successful. On the other hand, MPC does give users a way to turn the most widely used computer platform into something quite remarkable.

Bernard Banet

I/O
Ten principles for establishing the mass market for interactive media

Trip Hawkins
Chairman, Electronic Arts
President and CEO, SMSG, Inc.

Interactive software veteran Trip Hawkins recently stepped down from active management of his pioneering videogame firm Electronic Arts to head SMSG, Inc. — a new venture formed by Electronic Arts, Time Warner Enterprises and the venture capital firm Kleiner Perkins Caufield & Byers. Its charter is to catalyze the market for interactive media. Thus it’s safe to assume that the following principles will find their way into SMSG’s business plan.

It’s like a high school algebra problem: can you find the missing $12 billion?

In 1990, U.S. consumers spent $5 billion on movie tickets and $7 billion in arcade machines. So the “location-based” markets for movies and “interactive entertainment” would appear to be similar in size. Why, then, did those same consumers spend $14 billion to watch videos at home, but only $2 billion on interactive entertainment (home computer and video game software)?

The huge success of the VHS videotape standard explains the $14 billion. The convenience, quality, and price of VHS meets consumers’ needs. But until we have a comparable interactive media system, one that both meets consumers’ needs and is an industry standard, the interactive media software market at home will fall billions and billions of dollars short of its potential.

There is hope. Fortunately there is hope for interactive media. It is technically feasible for an interactive system with features as compelling as a VHS player to be marketed as early as 1993 or 1994. Can the industry get organized to create and establish a standard interactive system in that time frame? Here are ten guiding principles that can help us get there from here.

1. The interactive media system of the future can ride a Trojan horse. An interactive system requires digital computer technology, which is predominantly used in the PC and video game markets. And soon we will witness the complete computerization of consumer electronics. Audio has already gone digital. Now cable, film and TV are starting to go digital as well. Interactive media, and the consumer, will benefit from the inevitable synergy. For example, a consumer may justify the purchase of a CD-ROM drive because it can be used to watch digital films.

2. The mass-market interactive media box may be a computer, but it could also be a cable TV receiver.

The cable industry is moving to fiber optics in order to transmit up to 1,000 channels at once. It takes a computer to keep track of that much information. Cable operators install a new box in a home every five to seven years. With over 60 million cabled homes now, up to 10 million new boxes could be installed in a single year. Great synergy will result if those boxes can be expanded with a CD-ROM drive into complete interactive media systems.
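The arithmetic behind that estimate is easy to check; all of the inputs are Hawkins’s figures from the paragraph above, not independent data.

```python
# Back-of-the-envelope check of the installed-base arithmetic.
cabled_homes = 60_000_000
replacement_cycle_years = (5, 7)   # a new box every five to seven years

boxes_per_year = [cabled_homes // y for y in replacement_cycle_years]
print(boxes_per_year)   # roughly 8.6 to 12 million boxes a year
```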

3. The film industry, not the computer industry, will drive the future of compression technology. The biggest future customer of compression technology will be the movie companies. Whatever technology they bless as “good enough” will become a de facto standard in all other applications. For the same reasons, VHS videotape players have replaced professional players in schools and most business uses. If the film industry supports a digital movie format based on CD-ROM, the victory will be for interactive media, and the spoils will go to the consumer.

4. Despite the high profile, digital HDTV is not a key factor. Some experts think it will be 2005 before the U.S. installed base of digital HDTVs reaches one million. It’s a simple and unavoidable cost problem. Meanwhile, the world’s population of analog color TVs is well over 300 million, and most will still be in use at the start of the next millennium. Interactive media can therefore focus on the simpler digital problem of driving pixels on current TVs, with an eye towards supporting HDTV in the future.

5. Real-time animation is more important than full-motion video. Only something really new can drive the creation of a new market, and the only new thing about interactive media is that it is interactive. This means that the screen must respond to the user’s input, ideally at a speed of animation that feels like TV. TV’s effective animation rate is more than six million pixels per second. By contrast, current PCs, video games, and CD-ROM systems are typically in the range of a million pixels per second.
Adding compression technology to current PCs may create full-motion digital video, but it won’t increase the interactive animation capability. With digital video, we are merely spraying a firehose of pre-computed digital frames onto the screen of a digital projector: a marvelous capability but not an interactive one. Interaction requires significantly changing the content of the individual frames, which is not possible if they were only pre-calculated and stored as a digital filmstrip.
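The pixel-rate comparison is worth checking. Hawkins gives only the totals; the 512×400 active picture below is our assumed TV-like resolution, chosen to show where a figure of just over six million pixels per second can come from.

```python
# Rough check of the pixel-rate claim. The 512x400 active picture is
# an assumption; the article states only the totals.
tv_pixels_per_sec = 512 * 400 * 30            # 30 frames per second
pc_pixels_per_sec = 1_000_000                 # "about a million"

print(tv_pixels_per_sec)                      # just over six million
print(tv_pixels_per_sec / pc_pixels_per_sec)  # machines fall roughly 6x short
```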

6. CD-ROM is a two-edged sword. CD-ROM is a critical storage device for multimedia, but it is only a storage device; the computer architecture determines what the consumer will see and hear. Current efforts to match CD-ROM with 16-bit PCs and video games may backfire. CD-ROMs are slower than hard disk drives and cartridges. And while a real-time animation rate of one million pixels per second is outstanding for a $150 cartridge game system, consumers will expect much more from a CD-ROM system’s higher price. Matching CD-ROM technology with a 16-bit computer architecture is like trying to pull an 18-wheel truck with a bicycle.

7. The consumer will not pay $1,000 for an interactive system. Sure, maybe a few will. Maybe even a million. But not 300 million. Consumers don’t have that kind of money, and they aren’t educated about interactive media. Yes, VCRs and CD audio were launched at $1,000. But nearly every household was already a consumer of TV and music, and with an educated market that big, there were enough innovative consumers to jump-start the marketplace. The industry shouldn’t foot a probable $100 million marketing and education bill unless it is sure the consumer, once educated, will buy. Nintendo did, but its price point was $100. With enough capability, a $500 player might work for interactive media now that the benefits of CD audio and video games have been established.

8. We need to turn a new page in market and technology history. Every single CD-ROM system that has been announced is based around a 16-bit microprocessor architecture, typically with a PC or video game bias. These systems, even in an evolved form, lack the requisite performance for establishing a new mass market. Designed for general-purpose computing, they also carry excess baggage that adds unnecessarily to manufacturing cost. A fresh start is needed, using 1990s technologies that focus on the precise needs of interactive media alone.

9. The politicians and lawyers may be as important as the engineers. Interactive media needs legislation to make CD-ROM software rental and consumer write-once optical disc drives illegal, and to adapt “fair use” law to address multimedia copyright issues, such as the question of “fair modification.” New pricing structures need to be developed for the use of copyrighted materials. A multimedia equivalent of ASCAP is needed.

10. We need a SPARC or an ACE for interactive media. The interactive media market cannot be built by a single company. The business PC industry has recognized the need to move beyond the current generation of technology, and the opportunity for cooperation provided by the currently lax antitrust climate. Sun’s SPARC chip, the ACE alliance, and the Apple-IBM partnership are examples. The interactive media industry needs similar coalitions to address the issues raised above.

In the same sense that Hollywood’s movie libraries awaited VHS video to unleash their huge market potential, the world’s software and media companies sit and wait for an interactive media future that makes sense. Like a boulder at the top of a cliff, these companies have the potential energy to bust open the market, but need an organized industry effort to start them rolling. Only then will we find the missing $12 billion.

•EVENTS

Microprocessor Forum
Nov. 6-7, San Francisco, CA
Microprocessor Report
(800) 327-9893, fax (415) 549-4342
The ultimate forum if you’re interested in the hardware of the future. This year’s forum includes sessions on portable computers and other key issues that will affect the world of digital media.

Online/CD-ROM ’91 Conf. & Expo.
Nov. 11-13, San Francisco, CA
Online, Inc.
(203) 227-8466, fax (203) 222-0122
Workshops, conference sessions and exhibits will cover practical database searching, state-of-the-art microcomputing for searchers, CD-ROMs in use, new titles, hardware/software, networking and CD-ROM LANs.

Ecotech Conference: Discovering the New Mind in Business
Nov. 14-17, Monterey, CA
Tides Foundation
(619) 259-5110, fax (619) 259-1495
Do the right thing. This new conference will explore pressing ecological issues — both environmental and social — and how businesses can revise their strategies and ethics in order to effect change.

MultiMedia Expo
Nov. 18-20, San Jose, CA
American Expositions, Inc.
(212) 226-4141
Computer and communications professionals will attend presentations and workshops on topics such as multimedia telecommunications and making multimedia affordable.

8th Annual Flat Information Displays Conf. & Exhibition
Dec. 10-12, Santa Clara, CA
Stanford Resources
(415) 322-0247, fax (415) 322-0469
This “interactive forum” lets users and designers meet with vendors of this critical technology, a central component of digital media, to discuss requirements and applications.

1992 International Winter Consumer Electronics Show
Jan. 9-12, 1992, Las Vegas, NV
Electronics Industries Association
(202) 457-8700
Oct. 4 was the last day to return a hotel reservation form for Winter CES, so if you want a place to stay, you’d better get on it.

-30-