WO2010150220A1 - Method and system for controlling the rendering of at least one media signal - Google Patents

Method and system for controlling the rendering of at least one media signal

Info

Publication number
WO2010150220A1
WO2010150220A1 (PCT/IB2010/052882)
Authority
WO
WIPO (PCT)
Prior art keywords
user
rendering
media signal
reference direction
perceptible
Prior art date
Application number
PCT/IB2010/052882
Other languages
French (fr)
Inventor
Arnoldus Werner Johannes Oomen
Erik Gosuinus Petrus Schuijers
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2010150220A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G02B2027/0187 Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04804 Transparency, e.g. transparent or translucent windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/62 Semi-transparency

Definitions

  • the invention relates to a method of controlling a system for rendering in perceptible form at least one media signal, including: obtaining at least one media signal; determining a deviation of a user's viewing direction from a reference direction; and causing at least one rendering device to render an obtained media signal in perceptible form through an associated modality, wherein the at least one rendering device is controlled to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction.
  • the invention also relates to a system for controlling the rendering in perceptible form of at least one media signal, including: an interface for obtaining at least one media signal; a system for determining a deviation of a user's viewing direction from a reference direction; and at least one rendering device for rendering an obtained media signal in perceptible form through an associated modality, the system controlling the at least one rendering device to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction.
  • the invention also relates to an apparatus for rendering a media signal in perceptible form through an associated modality, including an interface for obtaining the media signal.
  • the invention also relates to a computer program.
  • US 2002/0044152 discloses a body-mounted wearable computer worn by a user.
  • the computer includes a central processing unit, a memory and a storage device.
  • the computer has a variety of body-worn output devices, including a head-mounted display in the form of an eyeglass-mounted display.
  • the eyeglass-mounted display is implemented as a display type that allows the user to view real-world images from their surroundings while simultaneously overlaying or otherwise presenting computer-generated information to the user.
  • a condition-dependent output supplier (CDOS) system is stored in memory.
  • the CDOS system monitors the user and the user's environment, and creates and maintains an updated model of the current condition of the user.
  • if an application program is generating geographically or spatially relevant information that should only be displayed when the user is looking in a specific direction, the CDOS system may be used to generate data indicating where the user is looking. If the user is looking in the correct direction, a transparent User Interface presents data in conjunction with the real-world view of that direction. If the user turns his or her head, the CDOS system detects the movement and informs the application program, enabling the transparent User Interface to remove the information from the display.
  • a problem of the known system is that it is configured to add auxiliary information to a view of the real world, but is less suitable for providing an immersive viewing experience. If the view of the real world is blocked to provide an immersive viewing experience, the user would be cut off from his or her environment.
  • This object is achieved by the method according to the invention, which includes: obtaining at least one media signal; determining a deviation of a user's viewing direction from a reference direction; and causing at least one rendering device to render an obtained media signal in perceptible form, wherein a direction generally fixed to at least part of a body of the user is used as the reference direction.
  • the viewing direction can be determined using only head tracking, only eye tracking or a combination of both methods.
  • By causing at least one rendering device to render an obtained media signal in perceptible form through an associated modality, wherein the at least one rendering device is controlled to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction, it is possible to provide a totally immersive experience when the user is looking in the reference direction, but to allow the user to interact with his or her environment when he or she turns away from the reference direction.
  • the modality, that is to say the sense through which the user can receive the output of a system comprising the rendering device, can be vision, hearing, tactition, olfaction or thermoception, for example.
  • By using as the reference direction a direction generally fixed to at least part of a body of the user, at least when the user is positioned in a predetermined orientation relative to at least one of the rendering devices, the user is not disoriented and a natural interface is provided to the user.
  • the reference direction will generally be the direction in which the user looks straight ahead, which is the most comfortable direction for listening to a reproduction of an audio signal and/or viewing a reproduction of a video signal.
  • the environment becomes more perceptible, so that interaction with the environment becomes possible.
  • the system need not have any navigation capabilities to implement the function of changing the level of immersion.
  • the system can be implemented in portable media reproduction apparatus without needing sensors to determine its own position and/or orientation relative to a landmark in its environment.
  • the at least one rendering device is controlled to make an environment of the user relatively more perceptible through the same modality with increasing deviation of the user's viewing direction from the reference direction, at least to a certain limit.
  • the environment is gradually blended in as the user turns further away from the reference direction.
  • This embodiment has the effect that a user becomes more sensitive to events in his or her environment as he or she turns away.
  • An increase in awareness without complete loss of the reproduction of the media signal is achievable with a relatively small movement of the head (or eyes).
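By way of illustration only (this sketch is not part of the patent disclosure), the gradual blending of the environment up to a certain limit can be expressed as a mapping from the deviation angle to an environment-mix level. The linear ramp and the 30-degree limit are assumed values:

```python
def environment_mix(deviation_deg, limit_deg=30.0):
    """Map the angular deviation of the viewing direction from the
    reference direction to an environment-perceptibility level in [0, 1].

    0.0 -> environment fully suppressed (full immersion),
    1.0 -> environment fully blended in.
    The linear ramp and the 30-degree limit are illustrative
    assumptions, not values taken from the patent.
    """
    deviation = abs(deviation_deg)
    if deviation >= limit_deg:
        return 1.0  # beyond the limit the environment is fully mixed in
    return deviation / limit_deg
```

A small head turn thus raises awareness of the environment without fully interrupting the reproduction of the media signal.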
  • the reference direction is a direction in which the user's head is oriented in an essentially forward-looking direction relative to the user's trunk.
  • This embodiment allows the user to listen to and/or watch a reproduction in perceptible form of the at least one media signal in a natural posture. It provides a natural interface, because people's head movement from a forward-looking direction is generally indicative of a shift in momentary attention from whatever it is they are doing to events, objects or persons in their environment.
  • the deviation of a user's viewing direction from a reference direction is measured as a deviation from a reference direction fixed to and moving with at least part of the body of the user.
  • This embodiment does not rely on the user being in a particular position or orientation, e.g. in a particular position in a chair, for reliable results.
  • the user can listen to and/or watch a reproduction in perceptible form of the at least one media signal in a natural posture, but can move around.
  • This embodiment is therefore also suitable for use with portable rendering devices.
  • the at least one rendering device includes a head-mounted rendering device.
  • This embodiment provides an easy implementation of the method, because devices to measure the user's viewing direction can easily be integrated in the head-mounted apparatus that comprises the rendering device. In particular, head tracking is easy to implement.
  • the at least one rendering device includes at least one head-mounted loudspeaker.
  • This embodiment is relatively lightweight, easy and cheap to implement and can be combined with many different types of common rendering apparatus, in particular headphones.
  • the at least one rendering device is comprised in a system for active noise cancellation, and wherein controlling at least one rendering device to make an environment of the user relatively more perceptible includes varying a level of active noise cancellation.
  • This embodiment allows for a fully immersive experience to be provided when the viewing direction is generally aligned with the reference direction.
  • the noise cancellation makes the environment generally imperceptible in a controlled way. It also provides an easy way of mixing in more or less sound from the user's environment.
  • An embodiment includes applying to a signal representative of at least one of the user's viewing direction and deviation of the user's viewing direction from a reference direction a filter for at least partially rejecting at least one of rapidly varying and small variations in the user's viewing direction, wherein the at least one rendering device is controlled on the basis of the filtered signal.
  • This embodiment ensures that the user's viewing or listening experience is not disturbed by unintended head movements or a limited degree of shaking.
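The filtering described above can be sketched in code. This is an illustrative sketch, not the patent's implementation; the exponential smoothing constant and the 2-degree dead band are assumed values:

```python
def smooth_deviation(samples, alpha=0.1, deadband_deg=2.0):
    """Filter a sequence of deviation samples (degrees).

    An exponential moving average suppresses rapidly varying changes
    (head jitter), and a dead band rejects small variations, so that
    unintended movements do not affect the rendering settings.
    alpha and deadband_deg are illustrative assumptions.
    """
    filtered = []
    state = 0.0
    for x in samples:
        state += alpha * (x - state)  # low-pass: suppress rapid jitter
        # dead band: ignore small deviations entirely
        out = state if abs(state) > deadband_deg else 0.0
        filtered.append(out)
    return filtered
```

Only a sustained, sizeable head turn then passes through to the control of the rendering device.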
  • a further embodiment includes adding a perceptible cue to at least one reproduction in perceptible form of at least one media signal upon detecting an event of a pre-determined type in an environment of the user.
  • This embodiment provides a user with a cue to turn his or her head to take heed of the environment whenever a certain event takes place. An example would be when someone sits down or gets up next to the user. The user can then decide to turn their head, whereas they might otherwise not have done so.
  • the cue is provided independently of the user's viewing direction, so that the user who does not turn his or her head can still be made aware of important changes in his or her environment.
  • the at least one rendering device includes a head-mounted display.
  • This embodiment provides a particularly suitable implementation of the method, because the head-mounted display will generally block out all of a user's environment.
  • a further feature of this embodiment is that it becomes relatively easy to track at least a user's head movements by means of sensors in the head-mounted system that comprises the display.
  • this embodiment of the method includes obtaining a video signal representative of a user's environment using a camera, in particular a camera following movement of the user's viewing direction, and combining a video signal from the camera with at least one video signal comprised in the at least one media signal for rendering on the display.
  • This embodiment can be implemented with a wide range of display types. Different types of mixing of the environment and the perceptible reproduction of the media signal are also relatively easy to implement through appropriate video signal processing techniques.
  • combining includes overlaying as a picture-in-picture a window for rendering the video signal obtained from the camera on a background image representative of the video signal comprised in the at least one media signal.
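A picture-in-picture combination of the camera signal with the media video can be sketched as below. The window scale, the margin and the nearest-neighbour scaling are illustrative assumptions; a real renderer would use proper resampling and GPU compositing:

```python
import numpy as np

def overlay_pip(background, camera, scale=0.25, margin=10):
    """Overlay a downscaled camera frame as a picture-in-picture window
    on the background frame rendered from the media signal.

    Both frames are H x W x 3 uint8 arrays. Nearest-neighbour scaling
    keeps the example dependency-free; scale and margin are
    illustrative assumptions.
    """
    out = background.copy()
    h, w = background.shape[:2]
    ph, pw = max(1, int(h * scale)), max(1, int(w * scale))
    # nearest-neighbour downscale of the camera frame
    ys = np.arange(ph) * camera.shape[0] // ph
    xs = np.arange(pw) * camera.shape[1] // pw
    pip = camera[ys][:, xs]
    # paste the window into the top-left corner of the background
    out[margin:margin + ph, margin:margin + pw] = pip
    return out
```

The window size (and, with an alpha blend instead of a hard paste, its opacity) could then be driven by the deviation of the viewing direction from the reference direction.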
  • the head-mounted display device includes a device for optically mixing a representation of a video signal comprised in the at least one media signal and at least a component of light admitted from the environment.
  • the system for controlling the rendering in perceptible form of at least one media signal includes: an interface for obtaining at least one media signal; a system for determining a deviation of a user's viewing direction from a reference direction; and at least one rendering device for rendering an obtained media signal in perceptible form through an associated modality, the system controlling the at least one rendering device to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction, wherein the reference direction is a direction generally fixed to at least part of a body of the user, at least when the user is positioned in a pre-determined orientation relative to at least one of the rendering devices.
  • the system is configured to carry out a method according to the invention.
  • the apparatus for rendering a media signal in perceptible form through an associated modality includes an interface for obtaining the media signal and a control system according to the invention.
  • a computer program including a set of instructions capable, when incorporated in a machine-readable medium, of causing a system having information processing capabilities to perform a method according to the invention.
  • Fig. 1 is a block diagram illustrating some components of a head-mounted system for rendering audiovisual media;
  • Fig. 2 is a sketch to illustrate a variable on the basis of which settings of the system of Fig. 1 are adjusted;
  • Fig. 3 is a flow chart showing steps in the adjustment of the settings of the system of Fig. 1; and Fig. 4 is a sketch providing a top view of a system for rendering media signals in perceptible form to illustrate an alternative variable on which an alternative adjustment method is based.
  • Fig. 1 illustrates an apparatus for rendering in perceptible form an audio signal, video signal or audiovisual signal.
  • the apparatus includes a portable data processing device 1, e.g. a smart phone, media player, PDA (Personal Digital Assistant), or the like.
  • the portable data processing device 1 includes a network interface 2 for receiving the media signal, which can be an interface to a wireless local area network, cellular phone network, Wide-Area Network (such as a WiMax network), etc.
  • the interface provides access to a data transfer medium for receiving media signals.
  • the portable data processing device 1 can also obtain such signals from a portable data storage medium using an appropriate reader 3.
  • Suitable storage media include solid-state memory devices, optical disks, magnetic and magneto-optical recording media, etc.
  • Although the media signals in the illustrated embodiment are pre-recorded, the principles outlined herein apply equally to locally generated media signals.
  • such signals include audio, visual and audiovisual signals generated in a gaming device.
  • the media signals are signals independent of a direct environment of the portable data processing device 1, more particularly artificial or pre-recorded media signals.
  • the portable data processing device 1 operates under the control of a data processing unit 4 interfacing with main memory 5 to execute instructions stored in non-volatile memory, e.g. on a data storage device 6.
  • Video signals are processed by a video codec 7 to provide a video signal to a display device 8 comprised in a head-mounted device 9 worn on a head 10 of a user 11 (see also Fig. 2).
  • the head-mounted device 9 illustrated in Fig. 1 comprises optics (not shown in detail) for optically mixing the representation of the video signal generated by the display device 8 and at least a component of light admitted from the environment through a device 12 for admitting a variable amount of ambient light in the form of an image of the environment of the user 11.
  • the device 12 can be a polariser.
  • the portable data processing device 1 includes an interface 13 for generating control signals for this device 12.
  • the image of the environment and the representation of the video signal can be combined in a polarising beam splitter, for instance.
  • Structural details of a display engine for the head-mounted device 9 are provided e.g. in US 2007/0081256 Al.
  • a head-mounted device 9' includes a display device 8' and a camera 14, which is fixed in position relative to the head-mounted device 9', generally such that it is directed in the direction in which a user wearing the head-mounted device 9' is looking.
  • the camera 14 provides a video signal to a camera interface 15 of the portable data processing device 1.
  • the latter combines the video signal from the camera 14 with a video signal obtained from a media signal to provide a video signal for the display device 8'.
  • the illustrated portable data processing device 1 also includes an audio output stage 16 for providing signals to speakers 17,18 combined with the head-mounted device 9,9'.
  • the system comprising the speakers 17,18, a microphone 19 connected to an interface 20 of the portable data processing device 1 and the portable data processing device 1 implement a system for active noise cancellation that cancels more or less of at least certain types of ambient sound through the generation of phase-shifted sound.
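The principle of making the environment more or less audible by varying the level of cancellation of phase-inverted sound can be illustrated with an idealised sketch (not from the patent; real active noise cancellation works acoustically, with microphone pickup, latency and imperfect cancellation):

```python
def cancel(ambient, level):
    """Add phase-inverted ambient sound, scaled by the cancellation
    level in [0, 1].

    level=1.0 cancels the ambient component completely (maximum
    immersion); level=0.0 leaves the environment fully audible.
    Idealised sketch: sample-perfect inversion is an assumption.
    """
    return [s + (-s) * level for s in ambient]
```

Lowering `level` as the user turns away from the reference direction mixes progressively more ambient sound back in.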
  • the portable data processing device 1 is programmed to control at least one of the audio and video rendering systems in such a way as to make more or less of the environment audible and/or visible relative to the audio and video obtained from an external source that it is rendering.
  • the representation of the video signal from the media signal is generated so as to cover essentially all of the displayable area of a screen of the display device 8,8'.
  • the environment is made visible to a greater or lesser degree. This degree of perceptibility is controlled on the basis of an input signal representative of a deviation of the viewing direction 21 from the reference direction 22, as shown in Fig. 2.
  • the portable data processing device 1 has an interface 23 to one or more sensors 24,25 for determining the deviation of the viewing direction 21 from the reference direction 22.
  • the deviation can be determined by integrating signals from accelerometers, electronic compasses or gyroscopes comprised in the head-mounted device 9.
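Determining the deviation by integrating gyroscope signals can be sketched as follows. Plain Euler integration of the yaw rate is an illustrative simplification; a practical head tracker would fuse gyroscope, accelerometer and compass data to limit drift, as the text indicates:

```python
def integrate_yaw(gyro_rates_dps, dt=0.01, initial_deg=0.0):
    """Integrate gyroscope yaw-rate samples (degrees per second,
    sampled every dt seconds) into a yaw angle relative to the
    calibrated reference direction.

    Euler integration and the sampling interval are illustrative
    assumptions; drift correction is omitted for brevity.
    """
    yaw = initial_deg
    for rate in gyro_rates_dps:
        yaw += rate * dt
    return yaw
```

The calibration step mentioned below fixes the zero point of this angle to the forward-looking reference direction.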
  • a calibration step is carried out to determine where the reference direction 22, corresponding to the direction in which the user's head 10 is oriented in an essentially forward-looking direction, is located.
  • one or more of the sensors 24,25 is provided to locate and determine the orientation of the part of the body to which the reference direction 22 is fixed and with which it moves. As an example, the location of the shoulders could be determined. More detailed examples of suitable methods are given in European patent application 09152769.7, filed on 13 February 2009 in the name of the present applicant (PHO 12962).
  • the tracking of the head position can be combined or replaced by the tracking of the eye position.
  • the viewing direction 21 can be determined as a combination of the yaw angle of the head 10 and the eyes of the user 11. In general, tracking of the head will suffice. This would leave the user free to move his or her eyes to focus on different aspects of the video being rendered without any effects on how much of the environment is made visible.
  • when the portable data processing device 1 has been commanded to render at least one of an audio signal, a video signal and an audiovisual signal, it continually carries out a method comprising the step of determining the deviation of the viewing direction 21 from the reference direction 22.
  • the signal representative of the deviation, or the signal representative of the viewing direction 21, if that direction is determined separately from the reference direction 22 and then subtracted, is filtered (step 27).
  • the filter is arranged to reject rapidly varying and/or small variations in the deviation.
  • hysteresis can be applied, so that only sustained changes in the deviation lead to an adjustment of the settings.
  • Another type of filter would be a low-pass filter or similar integrating system.
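The hysteresis just mentioned, where only sustained changes toggle the settings, can be sketched as a Schmitt-trigger-style state machine. The 10-degree and 5-degree thresholds are assumed values, not from the patent:

```python
def hysteresis(samples, on_deg=10.0, off_deg=5.0):
    """Apply hysteresis to a sequence of deviation samples (degrees).

    The 'turned away' state only switches on above on_deg and only
    switches off again below off_deg, so brief crossings of a single
    threshold do not toggle the rendering settings back and forth.
    Thresholds are illustrative assumptions.
    """
    turned_away = False
    states = []
    for deviation in samples:
        if not turned_away and abs(deviation) > on_deg:
            turned_away = True
        elif turned_away and abs(deviation) < off_deg:
            turned_away = False
        states.append(turned_away)
    return states
```

The gap between the two thresholds is what rejects a limited degree of shaking of the head.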
  • the environment of the user 11 is imperceptible to a maximum extent.
  • the reproduction of the video signal occupies essentially the entire available viewing area on a screen of the display device 8, and the light- admitting device 12 is controlled so as to admit no light for forming an image of the environment.
  • the video signal from the camera 14 is not combined with the video signal comprised in the media signal being rendered.
  • the level of noise cancellation is set to maximum.
  • the volume of the sound from the speakers 17,18 can be set to a (local) maximum within the settings provided by the user 11.
  • the portable data processing device 1 is configured to create an at least two-dimensional sound image, e.g. using head-related transfer functions. In that case, the apparent distance from the virtual sound source to the user 11 can be set to a minimum when the viewing direction 21 and the reference direction 22 coincide.
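Moving the virtual sound source away from the listener with increasing deviation can be sketched as a simple distance mapping. The distance range and angular limit are assumed values; a real implementation would feed the resulting distance into the head-related-transfer-function rendering:

```python
def virtual_source_distance(deviation_deg, min_dist_m=0.5,
                            max_dist_m=3.0, limit_deg=30.0):
    """Map the deviation angle to an apparent distance of the virtual
    sound source from the user.

    Minimum distance when viewing and reference directions coincide,
    receding linearly up to a limit. All parameter values are
    illustrative assumptions.
    """
    t = min(abs(deviation_deg) / limit_deg, 1.0)
    return min_dist_m + t * (max_dist_m - min_dist_m)
```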
  • the ratio of the perceptibility of the environment to that of the media signal can increase continuously or in discrete steps.
  • the relation can be linear, but will generally be non-linear, such that the relative perceptibility of the environment increases more rapidly as the deviation grows.
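A non-linear relation of this kind can be sketched as a power-law mapping (an illustrative choice; the exponent and limit are assumed values, not from the patent):

```python
def perceptibility(deviation_deg, limit_deg=30.0, exponent=2.0):
    """Non-linear mapping from deviation angle to relative
    perceptibility of the environment in [0, 1].

    With exponent > 1 the environment blends in slowly near the
    reference direction and more rapidly as the deviation grows toward
    the limit. Power-law shape and parameters are illustrative
    assumptions.
    """
    t = min(abs(deviation_deg) / limit_deg, 1.0)
    return t ** exponent
```

Compared with a linear ramp, small unintended deviations then have little visible or audible effect, while deliberate head turns quickly reveal the environment.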
  • the image of the environment is generally combined with the entire image rendered from the video signal obtained through the network interface 2, from a recording medium in the reader 3 or a file on the data storage device 6.
  • the rendered video signal will become more or less transparent over the entire displayed area.
  • the environment may be represented in a, possibly more or less transparent, window overlaid on the video image, with the window increasing in at least one of size and opaqueness as the deviation increases.
  • the level of active noise cancellation will decrease with increasing deviation.
  • the volume may also decrease. Where a soundscape is being created, the virtual source of sound will be caused to move further away from the user 11 with increasing deviation.
  • the portable data processing device 1 is arranged to monitor (step 30) the environment of the user 11 for events of a pre-determined type, in particular, the arrival of another person.
  • the wireless network interface 2 can be used to detect the arrival of another portable data processing device 1, e.g. one belonging to a particular designated other user, in the immediate vicinity of the user 11. If such an event is detected, then an audible and/or visible cue is provided (step 31). In one embodiment, this cue also gives directional information to the user 11. For example, an audible cue may be given via only one of left and right speakers 17,18. A visible cue may be given on the left or right side of an area of display provided by the display device 8,8'.
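Panning an audible cue toward the side of the detected event can be sketched as follows. The constant-sum pan law and the azimuth convention (negative values to the left) are assumptions; an actual implementation might use head-related transfer functions instead:

```python
def directional_cue(event_azimuth_deg, cue_gain=1.0):
    """Return (left_gain, right_gain) for an audible cue, panned toward
    the side of the detected event.

    Simple constant-sum pan law over azimuths clamped to [-90, 90]
    degrees; the convention that negative azimuths lie to the user's
    left is an illustrative assumption.
    """
    # map azimuth in [-90, 90] degrees to a pan position in [0, 1]
    clamped = max(-90.0, min(90.0, event_azimuth_deg))
    pan = (clamped + 90.0) / 180.0
    return (cue_gain * (1.0 - pan), cue_gain * pan)
```

An event detected fully to the left thus sounds only in the left speaker, giving the user the directional information described above.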
  • In Fig. 4, an alternative embodiment is sketched in outline.
  • This reference direction 22 is fixed to at least part of the body of a user (not shown in Fig. 4) only in the sense that it relies on the user being positioned in a pre-determined orientation relative to speakers 17', 18' and a television set 32.
  • a chair 33 is used as an example of a device for limiting the range of positions the user can assume, with the speakers 17', 18' being fixed to the chair 33.
  • This embodiment, like the embodiment discussed in relation to Fig. 1, has the effect that the system need not determine the position of any objects external to the system. It need not be aware of what is in the room or how the room is laid out. All that is required is an arrangement for determining a viewing direction 21' of a user, the reference direction 22 being known and fixed by the system.
  • the head-mounted device 9, 9' is dispensed with, so that an audio-only variant results.
  • the user need not operate any special controls when e.g. watching television and turning to someone else in the room.
  • This other person need not be provided with any special beacons to enable the system to adjust the volume and/or level of noise cancellation in response to the user turning his or her head.
  • a stationary video camera and image analysis are used to determine the deviation of the viewing direction 21' from the reference direction 22.

Abstract

A method of controlling a system for rendering in perceptible form at least one media signal, includes obtaining at least one media signal. A deviation of a user's viewing direction from a reference direction is determined. At least one rendering device is caused to reproduce an obtained media signal in perceptible form through an associated modality, wherein the at least one rendering device is controlled to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction. A direction generally fixed to at least part of a body of the user, at least when the user is positioned in a pre-determined orientation relative to at least one of the rendering devices, is used as the reference direction.

Description

Method and system for controlling the rendering of at least one media signal
FIELD OF THE INVENTION
The invention relates to a method of controlling a system for rendering in perceptible form at least one media signal, including: obtaining at least one media signal; determining a deviation of a user's viewing direction from a reference direction; and causing at least one rendering device to render an obtained media signal in perceptible form through an associated modality, wherein the at least one rendering device is controlled to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction.
The invention also relates to a system for controlling the rendering in perceptible form of at least one media signal, including: an interface for obtaining at least one media signal; a system for determining a deviation of a user's viewing direction from a reference direction; and at least one rendering device for rendering an obtained media signal in perceptible form through an associated modality, the system controlling the at least one rendering device to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction.
The invention also relates to an apparatus for rendering a media signal in perceptible form through an associated modality, including an interface for obtaining the media signal.
The invention also relates to a computer program.
BACKGROUND OF THE INVENTION
US 2002/0044152 discloses a body-mounted wearable computer worn by a user. The computer includes a central processing unit, a memory and a storage device. The computer has a variety of body-worn output devices, including a head-mounted display in the form of an eyeglass-mounted display. The eyeglass-mounted display is implemented as a display type that allows the user to view real-world images from their surroundings while simultaneously overlaying or otherwise presenting computer-generated information to the user. A condition-dependent output supplier (CDOS) system is stored in memory. The CDOS system monitors the user and the user's environment, and creates and maintains an updated model of the current condition of the user. If an application program is generating geographically or spatially relevant information that should only be displayed when the user is looking in a specific direction, the CDOS system may be used to generate data indicating where the user is looking. If the user is looking in the correct direction, a transparent User Interface presents data in conjunction with the real-world view of that direction. If the user turns his or her head, the CDOS system detects the movement and informs the application program, enabling the transparent User Interface to remove the information from the display.
A problem of the known system is that it is configured to add auxiliary information to a view of the real world, but is less suitable for providing an immersive viewing experience. If the view of the real world is blocked to provide an immersive viewing experience, the user would be cut off from his or her environment.
SUMMARY OF THE INVENTION
It is an object of the invention to provide a method, system, apparatus and computer program of the types defined above that allow the media signal to be rendered in perceptible form so as to provide an immersive experience without making contact with the real-world environment of the user impossible. This object is achieved by the method according to the invention, which includes: obtaining at least one media signal;
determining a deviation of a user's viewing direction from a reference direction; and causing at least one rendering device to render an obtained media signal in perceptible form through an associated modality, wherein the at least one rendering device is controlled to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction, wherein a direction generally fixed to at least part of a body of the user, at least when the user is positioned in a pre-determined orientation relative to at least one of the rendering devices, is used as the reference direction.
The viewing direction can be determined using only head tracking, only eye tracking or a combination of both methods.
By causing at least one rendering device to render an obtained media signal in perceptible form through an associated modality, wherein the at least one rendering device is controlled to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction, it is possible to provide a totally immersive experience when the user is looking in the reference direction, but to allow the user to interact with his or her environment when he or she turns away from the reference direction. In this context, the modality, that is to say the sense through which the user can receive the output of a system comprising the rendering device, can be vision, hearing, tactition, olfaction or thermoception, for example. By using as the reference direction a direction generally fixed to at least part of a body of the user, at least when the user is positioned in a predetermined orientation relative to at least one of the rendering devices, the user is not disoriented and a natural interface is provided to the user. The reference direction will generally be the direction in which the user looks straight ahead, which is the most comfortable direction for listening to a reproduction of an audio signal and/or viewing a reproduction of a video signal. When the user turns to one side, the environment becomes more perceptible, so that interaction with the environment becomes possible. Because the reference direction is fixed to at least part of a body of the user, at least when the user is positioned in a pre-determined orientation relative to at least one of the rendering devices, the system need not have any navigation capabilities to implement the function of changing the level of immersion. Moreover, the system can be implemented in portable media reproduction apparatus without needing sensors to determine its own position and/or orientation relative to a landmark in its environment.
In an embodiment, the at least one rendering device is controlled to make an environment of the user relatively more perceptible through the same modality with increasing deviation of the user's viewing direction from the reference direction, at least to a certain limit.
Thus, the environment is gradually blended in as the user turns further away from the reference direction. This embodiment has the effect that a user becomes more sensitive to events in his or her environment as he or she turns away. An increase in awareness without complete loss of the reproduction of the media signal is achievable with a relatively small movement of the head (or eyes).
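By way of illustration only (not part of the claimed subject matter), the gradual blending described above can be sketched as a mapping from the deviation angle to an environment-perceptibility factor. The function name, the 60-degree limit and the exponent below are hypothetical choices, not taken from the application:

```python
def environment_mix(deviation_deg, limit_deg=60.0, exponent=2.0):
    """Map the yaw deviation from the reference direction to a mix
    factor in [0, 1]: 0 = fully immersive, 1 = environment dominant.

    A convex curve (exponent > 1) keeps small deviations nearly
    immersive while letting the environment blend in ever faster as
    the user turns further, up to the limit angle.
    """
    d = min(abs(deviation_deg), limit_deg) / limit_deg  # clamp and normalise
    return d ** exponent
```

Looking straight ahead (deviation 0) yields full immersion; at or beyond the assumed limit angle the environment is fully perceptible.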
In an embodiment, the reference direction is a direction in which the user's head is oriented in an essentially forward-looking direction relative to the user's trunk.
This embodiment allows the user to listen to and/or watch a reproduction in perceptible form of the at least one media signal in a natural posture. It provides a natural interface, because people's head movement from a forward-looking direction is generally indicative of a shift in momentary attention from whatever it is they are doing to events, objects or persons in their environment.
In an embodiment, the deviation of a user's viewing direction from a reference direction is measured as a deviation from a reference direction fixed to and moving with at least part of the body of the user.
This embodiment does not rely on the user being in a particular position or orientation, e.g. in a particular position in a chair, for reliable results. The user can listen to and/or watch a reproduction in perceptible form of the at least one media signal in a natural posture, but can move around. This embodiment is therefore also suitable for use with portable rendering devices.
In an embodiment, the at least one rendering device includes a head-mounted rendering device.
This embodiment provides an easy implementation of the method, because devices to measure the user's viewing direction can easily be integrated in the head-mounted apparatus that comprises the rendering device. In particular, head tracking is easy to implement.
In an embodiment, the at least one rendering device includes at least one head-mounted loudspeaker. This embodiment is relatively lightweight, easy and cheap to implement and can be combined with many different types of common rendering apparatus, in particular headphones.
In a variant, the at least one rendering device is comprised in a system for active noise cancellation, and controlling at least one rendering device to make an environment of the user relatively more perceptible includes varying a level of active noise cancellation.
This embodiment allows for a fully immersive experience to be provided when the viewing direction is generally aligned with the reference direction. The noise cancellation makes the environment generally imperceptible in a controlled way. It also provides an easy way of mixing in more or less sound from the user's environment.
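As an illustrative sketch only (real active noise cancellation requires adaptive filtering and latency compensation, both omitted here), varying the cancellation level with the deviation angle could look as follows; all names and the 60-degree limit are assumptions:

```python
def render_audio(media_frame, mic_frame, deviation_deg, limit_deg=60.0):
    """Mix one frame of audio samples: subtracting a scaled copy of the
    microphone signal approximates phase-inverted cancellation of
    ambient sound.  The cancellation gain shrinks as the user turns
    away from the reference direction, so more of the environment
    leaks through to the user's ears."""
    anc_gain = 1.0 - min(abs(deviation_deg), limit_deg) / limit_deg
    return [m - anc_gain * n for m, n in zip(media_frame, mic_frame)]
```

At zero deviation the ambient signal is cancelled at full strength; at or beyond the limit angle only the media signal is altered by nothing and the environment is fully audible.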
An embodiment includes applying to a signal representative of at least one of the user's viewing direction and deviation of the user's viewing direction from a reference direction a filter for at least partially rejecting at least one of rapidly varying and small variations in the user's viewing direction, wherein the at least one rendering device is controlled on the basis of the filtered signal.
This embodiment ensures that the user's viewing or listening experience is not disturbed by unintended head movements or a limited degree of shaking.
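The two filter types mentioned below in the detailed description (hysteresis and a low-pass or integrating system) can be combined in a minimal sketch; the class name, smoothing constant and dead-band width are illustrative assumptions:

```python
class DeviationFilter:
    """Reject rapid and small variations in the measured deviation: an
    exponential low-pass smooths jitter, and a dead-band (hysteresis)
    holds the output until a change is sustained and significant, so
    unintended head movements do not disturb the rendering."""

    def __init__(self, alpha=0.1, dead_band_deg=5.0):
        self.alpha = alpha              # low-pass smoothing factor (0..1)
        self.dead_band = dead_band_deg  # minimum change passed through
        self.smoothed = 0.0
        self.output = 0.0

    def update(self, deviation_deg):
        # low-pass: only sustained changes accumulate in the estimate
        self.smoothed += self.alpha * (deviation_deg - self.smoothed)
        # hysteresis: update the output only on a significant change
        if abs(self.smoothed - self.output) > self.dead_band:
            self.output = self.smoothed
        return self.output
```

The rendering settings are then driven by the returned, filtered value instead of the raw sensor reading.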
A further embodiment includes adding a perceptible cue to at least one reproduction in perceptible form of at least one media signal upon detecting an event of a pre-determined type in an environment of the user.
This embodiment provides a user with a cue to turn his or her head to take heed of the environment whenever a certain event takes place. An example would be when someone sits down or gets up next to the user. The user can then decide to turn their head, whereas they might otherwise not have done so. The cue is provided independently of the user's viewing direction, so that the user who does not turn his or her head can still be made aware of important changes in his or her environment. In an embodiment of the method, the at least one rendering device includes a head-mounted display.
This embodiment provides a particularly suitable implementation of the method, because the head-mounted display will generally block out all of a user's environment. A further feature of this embodiment is that it becomes relatively easy to track at least a user's head movements by means of sensors in the head-mounted system that comprises the display.
In a variant, this embodiment of the method includes obtaining a video signal representative of a user's environment using a camera, in particular a camera following movement of the user's viewing direction, and combining a video signal from the camera with at least one video signal comprised in the at least one media signal for rendering on the display.
This embodiment can be implemented with a wide range of display types. Different types of mixing of the environment and the perceptible reproduction of the media signal are also relatively easy to implement through appropriate video signal processing techniques.
In one implementation of this variant, combining includes overlaying as a picture-in-picture a window for rendering the video signal obtained from the camera on a background image representative of the video signal comprised in the at least one media signal.
This type of combination avoids a situation in which the user can see neither of the two video signals clearly when they are combined.
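One possible sketch of such an overlay, in plain Python on grayscale frames represented as lists of pixel rows; the top-right corner placement, the growth rule and the function name are illustrative assumptions rather than features of the application:

```python
def overlay_pip(media_frame, camera_frame, deviation_deg,
                limit_deg=60.0, max_fraction=0.5):
    """Overlay the camera view of the environment as a picture-in-picture
    window in the top-right corner of the rendered media frame.  The
    window grows with the deviation angle, up to max_fraction of the
    frame dimensions."""
    h, w = len(media_frame), len(media_frame[0])
    frac = max_fraction * min(abs(deviation_deg), limit_deg) / limit_deg
    ph, pw = int(h * frac), int(w * frac)
    out = [row[:] for row in media_frame]
    ch, cw = len(camera_frame), len(camera_frame[0])
    for y in range(ph):
        for x in range(pw):
            # nearest-neighbour sample of the camera frame into the window
            out[y][w - pw + x] = camera_frame[y * ch // ph][x * cw // pw]
    return out
```

At zero deviation the window vanishes and the media frame is returned unchanged.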
In an alternative variant, the head-mounted display device includes a device for optically mixing a representation of a video signal comprised in the at least one media signal and at least a component of light admitted from the environment.
This variant can be implemented using a relatively lightweight system, as no camera is required. Furthermore, the computational burden of video processing is avoided. There are also no problems of camera focus, since the user focuses on whatever is visible of the environment. According to another aspect, the system for controlling the rendering in perceptible form of at least one media signal according to the invention includes: an interface for obtaining at least one media signal; a system for determining a deviation of a user's viewing direction from a reference direction; and at least one rendering device for rendering an obtained media signal in perceptible form through an associated modality, the system controlling the at least one rendering device to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction, wherein the reference direction is a direction generally fixed to at least part of a body of the user, at least when the user is positioned in a pre-determined orientation relative to at least one of the rendering devices.
In an embodiment, the system is configured to carry out a method according to the invention.
According to another aspect, the apparatus for rendering a media signal in perceptible form through an associated modality according to the invention includes an interface for obtaining the media signal and a control system according to the invention.
According to a further aspect of the invention, there is provided a computer program including a set of instructions capable, when incorporated in a machine-readable medium, of causing a system having information processing capabilities to perform a method according to the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will be explained in further detail with reference to the accompanying drawings, in which: Fig. 1 is a block diagram illustrating some components of a head-mounted system for rendering audiovisual media;
Fig. 2 is a sketch to illustrate a variable on the basis of which settings of the system of Fig. 1 are adjusted;
Fig. 3 is a flow chart showing steps in the adjustment of the settings of the system of Fig. 1; and Fig. 4 is a sketch providing a top view of a system for rendering media signals in perceptible form to illustrate an alternative variable on which an alternative adjustment method is based.
DETAILED DESCRIPTION
Fig. 1 illustrates an apparatus for rendering in perceptible form an audio signal, video signal or audiovisual signal. The apparatus includes a portable data processing device 1, e.g. a smart phone, media player, PDA (Personal Digital Assistant), or the like. The portable data processing device 1 includes a network interface 2 for receiving the media signal, which can be an interface to a wireless local area network, cellular phone network, Wide-Area Network (such as a WiMax network), etc. Thus, the interface provides access to a data transfer medium for receiving media signals. The portable data processing device 1 can also obtain such signals from a portable data storage medium using an appropriate reader 3. Suitable storage media include solid-state memory devices, optical disks, magnetic and magneto-optical recording media, etc. Although media signals in the illustrated embodiment are pre-recorded, the principles outlined herein apply equally to locally generated media signals. In particular, such signals include audio, visual and audiovisual signals generated in a gaming device. In all these variants, the media signals are signals independent of a direct environment of the portable data processing device 1, more particularly artificial or pre-recorded media signals.
The portable data processing device 1 operates under the control of a data processing unit 4 interfacing with main memory 5 to execute instructions stored in non-volatile memory, e.g. on a data storage device 6. Video signals are processed by a video codec 7 to provide a video signal to a display device 8 comprised in a head-mounted device 9 worn on a head 10 of a user 11 (see also Fig. 2).
The head-mounted device 9 illustrated in Fig. 1 comprises optics (not shown in detail) for optically mixing the representation of the video signal generated by the display device 8 and at least a component of light admitted from the environment through a device 12 for admitting a variable amount of ambient light in the form of an image of the environment of the user 11. In particular, the device 12 can be a polariser. The portable data processing device 1 includes an interface 13 for generating control signals for this device 12. The image of the environment and the representation of the video signal can be combined in a polarising beam splitter, for instance. Structural details of a display engine for the head-mounted device 9 are provided e.g. in US 2007/0081256 Al.
An alternative embodiment is illustrated in dotted lines in Fig. 1. In the alternative embodiment, a head-mounted device 9' includes a display device 8' and a camera 14, which is fixed in position relative to the head-mounted device 9', generally such that it is directed in the direction in which a user wearing the head-mounted device 9' is looking. The camera 14 provides a video signal to a camera interface 15 of the portable data processing device 1. The latter combines the video signal from the camera 14 with a video signal obtained from a media signal to provide a video signal for the display device 8'.
The illustrated portable data processing device 1 also includes an audio output stage 16 for providing signals to speakers 17,18 combined with the head-mounted device 9,9'. In the illustrated embodiment, the system comprising the speakers 17,18, a microphone 19 connected to an interface 20 of the portable data processing device 1 and the portable data processing device 1 implement a system for active noise cancellation that cancels more or less of at least certain types of ambient sound through the generation of phase-shifted sound.
The portable data processing device 1 is programmed to control at least one of the audio and video rendering systems in such a way as to make more or less of the environment audible and/or visible relative to the audio and video obtained from an external source that it is rendering. Generally, the representation of the video signal from the media signal is generated so as to cover essentially all of the displayable area of a screen of the display device 8,8'. To allow the user to see the environment properly and with little effort, the environment is made visible to a greater or lesser degree. This degree of perceptibility is controlled on the basis of an input signal representative of a deviation |θ| of a user's viewing direction 21 from a reference direction 22 (see Fig. 2). The reference direction 22 as shown in Fig. 2 is a direction generally fixed to at least part of the body of a user 11. It is fixed to and moves with at least part of the body of the user 11. As illustrated, the reference direction is fixed to the trunk of the user 11, or at least to some part other than the head 10. The portable data processing device 1 has an interface 23 to one or more sensors 24,25 for determining the deviation |θ|. It is noted that the deviation |θ| is determined in one plane, corresponding essentially to the magnitude of the yaw of the user's head 10. Pitch is not taken into account.
In one embodiment, the deviation can be determined by integrating signals from accelerometers, electronic compasses or gyroscopes comprised in the head-mounted device 9. In such an embodiment, a calibration step is carried out to determine where the reference direction 22, corresponding to the direction in which the user's head 10 is oriented in an essentially forward-looking direction, is located.
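A minimal sketch of such sensor-based yaw tracking with a calibration step follows; sensor fusion to correct gyroscope drift is omitted, and all names are hypothetical:

```python
class YawTracker:
    """Estimate head yaw by integrating gyroscope angular velocity and
    report the deviation from a calibrated forward-looking reference.
    Real gyroscopes drift, so a practical tracker would additionally
    fuse compass or accelerometer data to correct the integral."""

    def __init__(self):
        self.yaw_deg = 0.0
        self.reference_deg = 0.0

    def integrate(self, angular_velocity_deg_s, dt_s):
        # accumulate angular velocity over one sample interval
        self.yaw_deg += angular_velocity_deg_s * dt_s

    def calibrate(self):
        # called while the user looks straight ahead relative to the trunk
        self.reference_deg = self.yaw_deg

    def deviation(self):
        return abs(self.yaw_deg - self.reference_deg)
```

After calibration, the reported deviation is relative to the forward-looking reference direction rather than to the sensor's arbitrary starting orientation.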
In another embodiment, one or more of the sensors 24,25 is provided to locate and determine the orientation of the part of the body to which the reference direction 22 is fixed and with which it moves. As an example, the location of the shoulders could be determined. More detailed examples of suitable methods are given in European patent application 09152769.7, filed on 13 February 2009 in the name of the present applicant (PHO 12962).
The tracking of the head position can be combined with or replaced by the tracking of the eye position. Thus, the viewing direction 21 can be determined as a combination of the yaw angles of the head 10 and the eyes of the user 11. In general, tracking of the head will suffice. This leaves the user free to move his or her eyes to focus on different aspects of the video being rendered without any effect on how much of the environment is made visible.
Turning to Fig. 3, when the portable data processing device 1 has been commanded to render at least one of an audio signal, a video signal and an audiovisual signal, the portable data processing device 1 continually carries out a method comprising the step of determining the deviation |θ| (step 26).
Optionally, the signal representative of the deviation |θ| or the signal representative of the viewing direction 21, if that direction is determined separately from the reference direction 22 and then subtracted, is filtered (step 27). The filter is a filter arranged to reject rapidly varying and/or small variations in the deviation |θ|. As an example, hysteresis can be applied, so that only sustained changes in deviation |θ| are taken into account. Another type of filter would be a low-pass filter or similar integrating system. Then (step 28), a suitable ratio between the perceptibility of the environment and the reproduction of the audio and/or video signal is determined, and the settings of the devices 12,17,18 are adjusted (step 29). In principle, where the (filtered) viewing direction 21 essentially coincides with the reference direction 22, the environment of the user 11 is imperceptible to a maximum extent. In the case of the video signal, the reproduction of the video signal occupies essentially the entire available viewing area on a screen of the display device 8, and the light-admitting device 12 is controlled so as to admit no light for forming an image of the environment. Similarly, in the alternative embodiment illustrated in Fig. 1, the video signal from the camera 14 is not combined with the video signal comprised in the media signal being rendered. As far as the audio signal is concerned, the level of noise cancellation is set to maximum. Additionally, or alternatively, the volume of the sound from the speakers 17,18 can be set to a (local) maximum within the settings provided by the user 11. In yet a further embodiment, the portable data processing device 1 is configured to create an at least two-dimensional sound image, e.g. using head-related transfer functions. In that case, the apparent distance from the virtual sound source to the user 11 can be set to a minimum when the viewing direction 21 and the reference direction 22 coincide.
With increasing deviation |θ|, more of the environment of the user is made perceptible. The ratio of the perceptibility of the environment to that of the media signal can increase continuously or in discrete steps. The relation can be linear, but will generally be non-linear, such that the relative perceptibility of the environment increases more rapidly with the deviation |θ| at larger deviations. Thus, the user 11 turning his or her head 10 to one side in a definite manner need not turn very far for the environment to become dominant. The image of the environment is generally combined with the entire image rendered from the video signal obtained through the network interface 2, from a recording medium in the reader 3 or a file on the data storage device 6. Thus, in effect, the rendered video signal will become more or less transparent over the entire displayed area. Eye movements to obtain a clear view of the environment are therefore not necessary. In another embodiment, the environment may be represented in a, possibly more or less transparent, window overlaid on the video image, with the window increasing in at least one of size and opaqueness as the deviation |θ| increases.
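The full-area transparency variant can be sketched as a per-pixel cross-fade over the whole frame; the grayscale list-of-rows representation and the function name are illustrative assumptions:

```python
def blend_full_frame(media_frame, env_frame, mix):
    """Cross-fade the entire display area: the rendered video becomes
    uniformly more transparent as mix (0..1) rises, so the environment
    shows through everywhere and no eye movement is needed to find it."""
    a = max(0.0, min(1.0, mix))  # clamp the mix factor
    return [[round((1.0 - a) * m + a * e) for m, e in zip(mr, er)]
            for mr, er in zip(media_frame, env_frame)]
```

The mix factor would be driven by the (filtered) deviation, for instance through a non-linear mapping of the kind discussed above.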
As far as the audio rendering is concerned, the level of active noise cancellation will decrease with increasing deviation |θ|. The volume may also decrease. Where a soundscape is being created, the virtual source of sound will be caused to move further away from the user 11 with increasing deviation |θ|.
As an option - illustrated in Fig. 3 - the portable data processing device 1 is arranged to monitor (step 30) the environment of the user 11 for events of a pre-determined type, in particular the arrival of another person. In one embodiment, the wireless network interface 2 can be used to detect the arrival of another portable data processing device 1, e.g. one belonging to a particular designated other user, in the immediate vicinity of the user 11. If such an event is detected, then an audible and/or visible cue is provided (step 31). In one embodiment, this cue also gives directional information to the user 11. For example, an audible cue may be given via only one of the left and right speakers 17,18. A visible cue may be given on the left or right side of an area of display provided by the display device 8,8'. Thus, where the user 11 is concentrating fully on the audio and/or video being rendered, he or she is given a cue to turn his or her head 10 to take account of the environment.

Referring to Fig. 4, an alternative embodiment is sketched in outline. In this case, the method of Fig. 3 is carried out, but a different type of reference direction 22' is used. This reference direction 22' is fixed to at least part of the body of a user (not shown in Fig. 4) only in the sense that it relies on the user being positioned in a pre-determined orientation relative to speakers 17', 18' and a television set 32. A chair 33 is used as an example of a device for limiting the range of positions the user can assume, with the speakers 17', 18' being fixed to the chair 33. This embodiment, like the embodiment discussed in relation to Fig. 1, has the effect that the system need not determine the position of any objects external to the system. It need not be aware of what is in the room or how the room is laid out. All that is required is an arrangement for determining a viewing direction 21' of a user, the reference direction 22' being known and fixed by the system.
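A directional cue rendered on only one of the left and right speakers can be sketched as follows; the function name and the simple stereo representation are assumptions for illustration:

```python
def directional_cue(cue_samples, side):
    """Return a (left, right) stereo pair carrying the cue on only one
    channel, so the user hears on which side of the environment the
    detected event occurred."""
    silence = [0.0] * len(cue_samples)
    if side == "left":
        return (list(cue_samples), silence)
    return (silence, list(cue_samples))
```

The resulting pair would be mixed into the audio being rendered, independently of the user's current viewing direction.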
It should be noted that the above-mentioned embodiments illustrate, rather than limit, the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps other than those listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

In an alternative variant of the embodiment of Fig. 1, the head-mounted device 9, 9' is dispensed with, so that an audio-only variant results. In such a variant, the user need not operate any special controls when e.g. watching television and turning to someone else in the room. This other person need not be provided with any special beacons to enable the system to adjust the volume and/or level of noise cancellation in response to the user turning his or her head.
In another embodiment, a stationary video camera and image analysis are used to determine the deviation |θ| of the user's viewing direction 21 from a reference direction fixed to at least part of the body of the user 11.


CLAIMS:
1. Method of controlling a system for rendering in perceptible form at least one media signal, including: obtaining at least one media signal; determining a deviation of a user's viewing direction from a reference direction; and causing at least one rendering device to render an obtained media signal in perceptible form through an associated modality, wherein the at least one rendering device is controlled to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction, wherein a direction generally fixed to at least part of a body of the user, at least when the user is positioned in a pre-determined orientation relative to at least one of the rendering devices, is used as the reference direction.
2. Method according to claim 1, wherein the at least one rendering device is controlled to make an environment of the user relatively more perceptible through the same modality with increasing deviation of the user's viewing direction from the reference direction, at least to a certain limit.
3. Method according to claim 1, wherein the reference direction is a direction in which the user's head is oriented in an essentially forward-looking direction relative to the user's trunk.
4. Method according to claim 1, wherein the deviation of a user's viewing direction from a reference direction is measured as a deviation from a reference direction fixed to and moving with at least part of the body of the user.
5. Method according to claim 1, wherein the at least one rendering device includes a head-mounted rendering device.
6. Method according to claim 5, wherein the at least one rendering device includes at least one head-mounted loudspeaker.
7. Method according to claim 6, wherein the at least one rendering device is comprised in a system for active noise cancellation, and wherein controlling at least one rendering device to make an environment of the user relatively more perceptible includes varying a level of active noise cancellation.
8. Method according to claim 1, including applying to a signal representative of at least one of the user's viewing direction and deviation of the user's viewing direction from a reference direction a filter for at least partially rejecting at least one of rapidly varying and small variations in the user's viewing direction, wherein the at least one rendering device is controlled on the basis of the filtered signal.
9. Method according to claim 1, including adding a perceptible cue to at least one reproduction of at least one media signal upon detecting an event of a pre- determined type in an environment of the user.
10. System for controlling the rendering in perceptible form of at least one media signal, including: an interface for obtaining at least one media signal; a system for determining a deviation of a user's viewing direction from a reference direction; and at least one rendering device for rendering an obtained media signal in perceptible form through an associated modality, the system controlling the at least one rendering device to make an environment of the user relatively more perceptible through the same modality when the user's viewing direction deviates from the reference direction, wherein the reference direction is a direction generally fixed to at least part of a body of the user, at least when the user is positioned in a pre-determined orientation relative to at least one of the rendering devices.
11. System according to claim 10, configured to carry out a method according to any one of claims 1- 9.
12. Apparatus for rendering a media signal in perceptible form through an associated modality, including an interface for obtaining the media signal and a control system according to claim 10 or 11.
13. Computer program including a set of instructions capable, when incorporated in a machine-readable medium, of causing a system having information processing capabilities to perform a method according to any one of claims 1-9.
PCT/IB2010/052882 2009-06-25 2010-06-24 Method and system for controlling the rendering of at least one media signal WO2010150220A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP09305604 2009-06-25
EP09305604.2 2009-06-25

Publications (1)

Publication Number Publication Date
WO2010150220A1 true WO2010150220A1 (en) 2010-12-29

Family

ID=42861123

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2010/052882 WO2010150220A1 (en) 2009-06-25 2010-06-24 Method and system for controlling the rendering of at least one media signal

Country Status (1)

Country Link
WO (1) WO2010150220A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5526022A (en) * 1993-01-06 1996-06-11 Virtual I/O, Inc. Sourceless orientation sensor
US20020044152A1 (en) 2000-10-16 2002-04-18 Abbott Kenneth H. Dynamic integration of computer generated and real world images
DE10255796A1 (en) * 2002-11-28 2004-06-17 Daimlerchrysler Ag Method and device for operating an optical display device
US20050231532A1 (en) * 2004-03-31 2005-10-20 Canon Kabushiki Kaisha Image processing method and image processing apparatus
US6972734B1 (en) * 1999-06-11 2005-12-06 Canon Kabushiki Kaisha Mixed reality apparatus and mixed reality presentation method
US20060119576A1 (en) * 2004-12-06 2006-06-08 Naturalpoint, Inc. Systems and methods for using a movable object to control a computer
US20070081256A1 (en) 2004-02-18 2007-04-12 Icuiti Corporation Micro-Display Engine
EP1898634A2 (en) * 2006-09-08 2008-03-12 Sony Corporation Image capturing and displaying apparatus and image capturing and displaying method


Similar Documents

Publication Publication Date Title
JP7470164B2 (en) Interactive augmented or virtual reality devices
US9927948B2 (en) Image display apparatus and image display method
JP5228307B2 (en) Display device and display method
US20120207308A1 (en) Interactive sound playback device
US20150077416A1 (en) Head mounted display for viewing and creating a media file including omnidirectional image data and corresponding audio data
CN111788543A (en) Image enhancement device with gaze tracking
JP2008096868A (en) Imaging display device, and imaging display method
US11061466B2 (en) Apparatus and associated methods for presenting sensory scenes
JP2014115457A (en) Information processor and recording medium
JP6292658B2 (en) Head-mounted video display system and method, head-mounted video display program
CN117631307A (en) Hybrid perspective augmented reality system and method for low vision users
US20220066207A1 (en) Method and head-mounted unit for assisting a user
JP2020520576A5 (en)
CN109644235A (en) The technology of focus is set in mixed reality application
JP5664677B2 (en) Imaging display device and imaging display method
CN108628439A (en) Information processing equipment, information processing method and program
JP2013083994A (en) Display unit and display method
JP5971298B2 (en) Display device and display method
US20230336865A1 (en) Device, methods, and graphical user interfaces for capturing and displaying media
KR20110064084A (en) Glass apparatus for watching 3 dimension image, and method thereof
CN209311783U (en) Headset equipment
JP7040521B2 (en) Information processing equipment, information processing methods, and programs
WO2010150220A1 (en) Method and system for controlling the rendering of at least one media signal
US9778895B2 (en) Systems, devices, components and associated computer executable code for providing remote viewing of a display associated with a computational device
US20200211295A1 (en) Methods and devices for transitioning among realities mediated by augmented and/or virtual reality devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10740295

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10740295

Country of ref document: EP

Kind code of ref document: A1