US20070028264A1 - System and method for generating and distributing personalized media - Google Patents

System and method for generating and distributing personalized media

Info

Publication number
US20070028264A1
Authority
US
United States
Prior art keywords
clip
media
insert
master
personalized
Prior art date
Legal status
Abandoned
Application number
US11/530,750
Inventor
Frederick Lowe
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed: https://patents.darts-ip.com/?family=37695854&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=US20070028264(A1). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Priority claimed from US 10/605,527 (US 7,123,696 B2)
Application filed by Individual
Priority to US 11/530,750
Publication of US 2007/0028264 A1
Status: Abandoned (current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/16 Analogue secrecy systems; Analogue subscription systems
    • H04N 7/173 Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N 7/17309 Transmission or handling of upstream communications
    • H04N 7/17318 Direct or substantially direct transmission and handling of requests
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/233 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25866 Management of end-user data
    • H04N 21/25891 Management of end-user data being end-user preferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/262 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/84 Generation or processing of descriptive data, e.g. content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 60/00 Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H 60/02 Arrangements for generating broadcast information; Arrangements for generating broadcast-related information with a direct linking to broadcast information or to broadcast space-time; Arrangements for simultaneous generation of broadcast information and broadcast-related information
    • H04H 60/04 Studio equipment; Interconnection of studios

Definitions

  • Embodiments of the invention pertain to the field of computer systems. More particularly, the invention is directed to a system and method for generating and distributing personalized media using computers.
  • Modern systems generate and utilize multimedia data in a plurality of different ways. For example, users can currently communicate information to and hear responses from systems that generate audio data and transmit that data back to the user over the telephone.
  • existing systems utilize a mapping between one form of data (e.g. numerical information or text data) and a set of audio files to generate an audio file for playback.
  • One common scenario where this occurs is when calling a bank to check bank account balances or transfer money. The system at the bank may, for example, obtain a user's account information via touchtone input and audibly play back that user's account information for purposes of confirmation.
  • Existing systems for building and distributing such audio files use the input to map to a set of prerecorded audio tracks and assemble a message for playback. The end result is oftentimes an awkward-sounding message that fails to seamlessly integrate the prerecorded audio tracks. In addition to obvious transitions between clips, gaps of silence and intonation differences make this type of personalized media disagreeable to the human ear.
  • Existing solutions do not provide a way to generate an audio file that seamlessly integrates a plurality of audio files in a way that makes the generated file sound like an original recording with undetectable transitions, rather than a computer generated message.
  • current systems do not personalize the content of the generated audio file based on user information automatically obtained from the device or software program utilized to access the system and/or context information associated with the user. For example, current systems do not provide a mechanism for automatically generating and disseminating personalized ring tones or ringback tones in real-time or in an asynchronous manner.
  • One or more embodiments of the invention are directed to a system and method for generating and distributing personalized media.
  • Personalized media is generated via one or more embodiments of the invention by selecting one or more master clips having predefined gaps, obtaining insert data (e.g., an insert clip), and seamlessly merging the insert data into the selected master clip to generate a media clip with undetectable transitions between spliced clips.
  • Personalized media is distributed via one or more embodiments of the invention by optionally adding dynamic context data to the media clip, formatting the media clip for a given client device and transmitting the resulting media clip to a client device. This distribution process may be in real-time or delayed until the media clip is requested at a later time.
  • Embodiments of the invention may utilize any computing environment from single processor computing systems to highly optimized multi-threaded server processes comprising seamless splicing of compressed media or any other architecture capable of achieving the desired scalability.
  • An insert clip may contain any type of data. In most instances, however, the insert clip is utilized for purposes of adding variables such as a name, place, time, gender, product name or any other desirable information to a master clip.
  • the integration between the master clip and the insert clip is seamless, meaning that there are no human-detectable transitions between insertion points in the media clip. Regardless of the size of the insert clip, the finished media clip lacks any noticeable gaps or intonation changes. Even though the media clip is generated using a plurality of different clips, the media clip sounds as if it was originally recorded in one take. Flash animation or other types of multimedia data such as video can be added to the media clip to enhance the user experience during playback.
  • An insert clip comprising dynamic context data may include for example a time, date, location, temperature or any other information not available until the time of delivery.
  • embodiments of the invention provide a mechanism for generating and distributing personalized media clips using what sounds like and/or is the voice of a celebrity. For instance, once the system merges one or more master clips together with one or more insert clips and thereby generates the media clip, the system can provide the media clip to a device and/or program for real-time playback or asynchronous pickup for later playback.
  • embodiments of the invention may use computer synthesized and/or TTS (text to speech) software of varying complexity in order to simulate voices.
  • Playback of the media clip initiates at a number of different types of devices and can be triggered by a multitude of different events.
  • Some examples of the types of playback devices used in accordance with one or more embodiments of the invention, include (but are not limited to) a computational device configured to access a network (e.g., the World Wide Web (WWW)) via a browser, an email client, or some other network interface.
  • a cell phone or any other type of portable or non-portable device (satellite, digital cable, and/or satellite radio) configured to output media clips (e.g., audio, video, etc.) may also function as a playback device.
  • a cell phone may obtain a personalized ring tone, personalized ringback tone, or personalized media clip for use with regard to any cell phone function via a cell gateway server.
  • a cell phone itself may create a media clip locally for playing back personalized media in one or more embodiments of the invention during ring, ringback or at any other time. Any combination of generation of personalized media on a telephone server or locally is in keeping with the spirit of the invention.
  • playback devices may comprise for example a gasoline pump identifying a user via an RFID based device, such as SpeedPass®, a cash register comprising a credit card reader for identifying a customer, a slot machine or a hotel door comprising a hotel room key reader for identifying a guest, or any other device capable of identifying a user and playing back a media clip.
  • FIG. 1 illustrates an architectural view of an embodiment of the invention for generating and distributing personalized media.
  • FIG. 2 illustrates a method for generating personalized media in accordance with one or more embodiments of the invention.
  • FIG. 3 illustrates a method for distributing personalized media in accordance with one or more embodiments of the invention.
  • FIG. 4 illustrates a block diagram representing the elements of one or more media clips configured in accordance with one or more embodiments of the invention.
  • FIG. 5 illustrates a flow chart showing the interactions of the system components for telephonic ring tone and/or ringback tone embodiments of the invention.
  • FIG. 6 illustrates an application comprising static clips that make up the master clip, and insert clips (otherwise known as tokens) for insertion into slots or insert points of a given duration within the master clip.
  • One or more embodiments of the invention are directed to a system and method for generating and distributing personalized media.
  • Personalized media is generated via one or more embodiments of the invention by selecting one or more static clips comprising a master clip having predefined gaps or slots, obtaining insert data (e.g., an insert clip), and seamlessly merging the insert data into the selected master clip to generate a media clip with undetectable transitions between spliced clips.
  • Personalized media is distributed via one or more embodiments of the invention by optionally adding dynamic context data to the media clip, formatting the media clip for a given client device and transmitting the resulting media clip to a client device. This distribution process may be in real-time or delayed until the media clip is requested at a later time.
  • Embodiments of the invention may utilize any computing environment from single processor computing systems to highly optimized multi-threaded server processes comprising seamless splicing of compressed media or any other architecture capable of achieving the desired scalability.
  • FIG. 6 shows an “application”, also known as a template, specifying static segments that remain the same from application to application and slots or insert points with fixed or variable-length durations into which tokens or insert clips are inserted in the master clip.
  • An insert clip may contain any type of data. In most instances, however, the insert clip is utilized for purposes of adding variables such as a name, place, time, gender, product name or any other desirable information to a master clip.
  • the integration between the master clip and the insert clip is seamless, meaning that there are no human-detectable transitions between insertion points in the media clip. Regardless of the size of the insert clip, the finished media clip lacks any noticeable gaps or intonation changes. Even though the media clip is generated using a plurality of different clips, the media clip sounds as if it was originally recorded in one take. Flash animation or other types of multimedia data such as video can be added to the media clip to enhance the user experience during playback.
  • An insert clip comprising dynamic context data may include for example a time, date, location, temperature or any other information not available until the time of delivery.
  • embodiments of the invention provide a mechanism for generating and distributing personalized media clips using what sounds like and/or is the voice of a celebrity. For instance, once the system merges one or more master clips together with one or more insert clips and thereby generates the media clip, the system can provide the media clip to a device and/or program for real-time playback or asynchronous pickup for later playback.
  • embodiments of the invention may use computer synthesized and/or TTS (text to speech) software of varying complexity in order to simulate voices.
  • a cell phone or any other type of portable or non-portable device configured to output media clips (e.g., audio, video, etc.) may function as a playback device.
  • Embodiments of the invention may generate and distribute personalized ring tones and ringback tones based on caller identification and dialed called party number.
  • An example media or ring clip could utilize a celebrity voice to announce “[user name] your [relative type] is calling”, where [user name] is the user's name spoken in the voice of a celebrity and [relative type] is selected from the list of {brother, mother, father, son, etc.}.
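  • The example template above can be represented concretely; the following sketch shows one possible in-memory representation of such an application and its token library (the class names, field names and file names are illustrative assumptions, not the patent's format):

```python
# Illustrative sketch only: a minimal in-memory representation of an
# "application" such as "[user name] your [relative type] is calling".
from dataclasses import dataclass

@dataclass
class Slot:
    name: str          # variable name, e.g. "user name"
    duration_s: float  # fixed gap length left in the master clip

@dataclass
class Application:
    static_segments: list[str]  # master-clip audio files, in playback order
    slots: list[Slot]           # insert points between/within the static segments

ring_clip = Application(
    static_segments=["master_part1.wav", "master_part2.wav", "master_part3.wav"],
    slots=[Slot("user name", 1.5), Slot("relative type", 1.0)],
)

# Token (insert clip) library keyed by variable name and value.
tokens = {
    ("user name", "Joe"): "celebrity_joe.wav",
    ("relative type", "brother"): "celebrity_brother.wav",
}
```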
  • ringback tones, comprising the sound played to the calling user while waiting for another user to answer the phone, may be personalized based on the calling party, i.e., using the caller identification number.
  • the telephone server playing the ringback tone may query for the personalized clip to play back utilizing the ANI and DNIS, i.e., caller identification and dialed called party number, respectively.
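  • The ANI/DNIS lookup described above might, for example, be backed by a simple table keyed on the two numbers; the sketch below assumes a hypothetical SQLite table named ringback_clips and is illustrative only:

```python
# Hypothetical lookup of a pre-generated ringback clip keyed by (ANI, DNIS).
# The database and table layout are assumptions for illustration.
import sqlite3

def find_ringback_clip(db_path: str, ani: str, dnis: str) -> str | None:
    """Return the stored clip path for this caller/called pair, if any."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT clip_path FROM ringback_clips WHERE ani = ? AND dnis = ?",
            (ani, dnis),
        ).fetchone()
    return row[0] if row else None

# Example: clip = find_ringback_clip("media.db", ani="5551234567", dnis="5559876543")
```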
  • the cell gateway (also known as a portal operator) itself or alternatively any other telephone computer system or server coupled with the cell gateway may generate the resulting ring tone that plays on the receiving cell phone when it is ringing, or the resulting ringback tone that plays on the initiating cell phone when waiting for another user to answer.
  • generating the ring tones, ringback tones and messages with personalized variables locally on the cell phone itself is in keeping with the spirit of the invention.
  • An embodiment of the invention allows for an RFID-based device, such as SpeedPass®, to provide a unique identification to an RFID reader, which in turn provides for a personalized message to be played back by a gas pump electronic interface unit, which in this case would be the playback device.
  • the gas station local server, or company main server may contain the personalized variable information.
  • when the unique identification is presented to either server, the resulting output media clip may be constructed on either server and played on the gas pump electronic interface unit.
  • Bluetooth devices in the vehicle or coupled with the user may also play back the output media clip if the gas pump electronic interface unit is configured with WiFi or other wireless technologies configured to request media output.
  • Another embodiment of the invention enables a playback device such as a kiosk for purchasing plane tickets or groceries to identify and play personalized media messages to a user.
  • Additional examples of playback devices used in embodiments of the invention include loyalty card readers, ATM machines, GPS devices in planes and cars.
  • Hotel electronic doors are another example playback device where the insertion of an electronic key into the guest's door plays a message such as “Welcome [title] [user surname]” with title and user surname set to “Ms.” and “Smith” respectively in this example.
  • Playback devices may connect to embodiments of the invention comprising computational resources; alternatively, if the playback device itself has enough computational power and storage holding the personalized information, or can obtain the personalized information from an identifier associated with the user, it may itself act as an embodiment of the invention in terms of constructing and playing the personalized media clip.
  • the hotel electronic door may comprise a network connection to the hotel's computing system. This connection may be wireless or wired.
  • the hotel computing system in this example may detect the electronic key or credit card-like magnetic key and determine the identification of the hotel guest.
  • the personalized message comprising the “Welcome [title] [user surname]” media clip would then be generated on the hotel's computing system, sent to the electronic door and played on a small speaker built into the electronic door.
  • Another playback device may be a credit card reader configured to play back a personalized message to a shopper after the user identifies themselves with the credit card.
  • media output in this case may include a Flash animation with the user's name and an audio track with the phrase, “Welcome [user name], your current purchase is missing your [time period] buy of [product name]”, where [user name], [time period] and [product name] are insert clips that seamlessly combine with the master clip to create the output media clip.
  • Another example playback device may be a slot machine capable of identifying the user via credit card, RFID or hotel room key.
  • the slot machine could play a message such as “[User name], you just won [winning amount] dollars!”.
  • the slot machine may be networked to a server comprising the computational power and requisite personalization clips to create the output media clip or the slot machine itself may obtain an identifier associated with the user and construct the media clip itself.
  • Another example playback device may be a digital cable set-top box where personalization occurs on a cable system server and is sent to the IP address of the cable box or uses the subscriber ID in order to encode a message on a data channel.
  • Another example playback device may be a toy, which may be personalized at the factory at on-line purchase time, or at home through a network connection or through a wireless interface to a local computer with a network connection or configured to run as an embodiment of the invention.
  • the purchaser may choose the personalization clips that are to be inserted into the toy before shipping. For example, this would allow the toy to sound like a famous cartoon character and to arrive preloaded for the child.
  • network capable toys would be able to be dynamically loaded with personalized output media clips.
  • Toys containing processing units would be able to switch output media clips based on accelerometers that could be used in order to determine if the older or younger sibling was playing with the toy.
  • the toy may cry out, “[user name] be nice to me”, where [user name] would be the name of the rougher of the two children in this example.
  • Context information may be used in this embodiment of the invention as set by the parent. Encryption may be utilized within the media clip holding portion of the device in order to prevent hackers from creating toys with unwanted sounds, words or gestures.
  • the time at which playback initiates depends upon the context of the device. Displaying a certain website, reading a particular email, calling a particular person, or being in a certain location are some of the examples of the different contexts that might trigger playback. These non-personal events or values may cause branching in determining what clips to splice together for final playback. For instance, a user of the system might initiate playback by visiting a certain web page (or some other type of online document or program) where the user will hear a personalized greeting from a celebrity. If, for example, the user visits an online bookstore, that user might receive a personal greeting from one of the user's favorite authors who then proceeds to promote his newest novel.
  • embodiments of the invention may take branching actions based on this context information.
  • Other examples include personalized messages via email, a cell phone or some other playback device.
  • a timer function or calendar function may initiate a media clip transmission.
  • Another example context function producing an asynchronous initiation of a media clip without user intervention may include a location context whereby a GPS receiver in a phone or car initiates a media message based on location. Any non-personalized information or information source may be used as a context source.
  • HTTP is a stateless protocol and connections are generated when needed by a requesting device; therefore, devices accessing embodiments of the invention over this protocol must employ other means by which to recognize asynchronous notifications, such as polling or maintaining an open connection over a separate communications protocol.
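  • As one illustrative approach to the polling alternative just mentioned, a playback device could periodically ask the media server for newly generated clips; the endpoint URL and JSON layout below are assumptions, not part of the patent:

```python
# Sketch of client-side polling for asynchronously generated media clips,
# since plain HTTP gives the server no way to push to the device.
import json
import time
import urllib.request

def poll_for_clips(base_url: str, device_id: str, interval_s: float = 30.0) -> None:
    """Periodically ask the media server whether new clips are ready for this device."""
    while True:
        with urllib.request.urlopen(f"{base_url}/clips?device={device_id}") as resp:
            pending = json.loads(resp.read())            # assumed: list of {"url", "filename"}
        for clip in pending:
            urllib.request.urlretrieve(clip["url"], clip["filename"])  # fetch each clip
        time.sleep(interval_s)                            # wait before the next poll
```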
  • Embodiments of the invention would, for example, allow a manager to notify all members of his or her team in a personalized manner that there was a meeting on Monday, saving many phone messages.
  • the master clip in this example could be recorded and saved on a cell phone, with each person's name recorded on the cell phone as well.
  • Embodiments of the invention may contain software interfaces allowing the user to in effect produce the master clip by holding a given button when recording the master clip and asserting another button when recording each variable insert clip. Alternatively, the user could simply save bulk personalization messages and send them en masse when needed, as in the case of staff meetings.
  • Embodiments of the invention may alternatively operate without manager intervention whereby the group to be invited to the staff meeting is contained within a server and a calendar function on a management server sends personalized media clips to the attendees a predetermined amount of time before the meeting.
  • the media clip may be generated and automatically transmitted when the user visits a particular web page.
  • the invention contemplates the use of a variety of different techniques for dynamically generating media clips.
  • the system obtains user information from a cookie file to instantaneously render a personalized multimedia file.
  • user data is already known by the system or obtained and confirmed via a log-in process.
  • embodiments of the invention may utilize a database of user information to assemble the media clip.
  • a content provider that wishes to distribute a media clip (e.g., a personalized advertisement or some other personalized media clip)
  • the system utilizes the request, which identifies or contains at least one master clip to be readied for playback and contains type information associated with each of the locations where insert clips are to be merged into the master clip.
  • the type information is then utilized to obtain user information from a system database and the user information is in turn used to obtain relevant insert clips for purposes of generating a media file.
  • a ring tone or a ringback tone may be generated via an embodiment of the invention and stored on a telephone company server for play when ringing a user, or for playing to a user that is ringing another user in the case of ringback tones.
  • the information in the system may be queried by ANI and DNIS, i.e., caller identification and dialed called party number, respectively.
  • FIG. 1 illustrates an architectural view of an embodiment of the invention for generating and distributing personalized media.
  • Embodiments of the invention provide a user such as sender 100 with a way to generate and distribute media clips to one or more other recipients such as receiver 107 .
  • sender 100 utilizes a client to connect with receiver 107 .
  • a client is typically a computing device capable of communicating through one or more types of networks.
  • An example client as shown in this embodiment of the architecture is cell phone 101 or 106 .
  • the client may alternatively comprise a computing device such as a computer equipped with at least one processor, memory and storage media.
  • the computing device is equipped and configured to communicate using at least one network communication means.
  • a client may be equipped with a modem to communicate through (wire based or wave based wireless) telephone services or alternatively may be configured to communicate through one or more networking protocols such as Transmission Control Protocol (TCP) in combination with the Internet Protocol (IP) over the Internet.
  • Computing devices such as cellular telephones, Personal Digital Assistants (PDAs), desktop computers, laptop computers or any other electronic apparatus capable of communicating through a wire-based and/or wireless network may be utilized as a client.
  • a client may be a personal digital assistant equipped with a browser capable of rendering Hypertext Markup Language (HTML), a JAVA virtual machine capable of running applets received from a remote server, and any other computer program code that supports communication between the user and a remote machine.
  • Other means that allow the user to upload personal media clips include an email client, a data streaming service supported by the client, a HyperText Transport Protocol (HTTP) post, and any other mechanism that allows a user to post media clips to media generation server 104.
  • Client 106 (also referred to as a playback device) comprises a media player.
  • a client 106 may be a cell phone that allows one or more users to access a media database 103 to play media clips.
  • Other types of multimedia destination clients may consist of a desktop computer equipped with a multimedia player, a personal digital assistant and any other electronic device capable of playing a media clip or allowing access to a network location that delivers media clips.
  • a media server is designed to handle access to and the delivery of media clips and typically is capable of handling communication to senders and receivers for purposes of delivering media.
  • An example of a media server is cell gateway 102 or 105 .
  • Other context information may be retrieved from a plurality of sources by the media server or media database (or other server coupled with the media database) such as calendar information, location information or any other information that may be utilized in inserting context information into a media clip.
  • Media server 105 is capable of connecting to third party servers (e.g., other websites), local or remote databases to collect context and/or media clips information.
  • Client 106 may also comprise a scheduler component in order to poll for media clips from media server 105 .
  • Systems embodying the invention may optionally utilize media generation engine 104 to process media clips. For example, after media server 105 determines the context and the master and insert clips to use for generating the output media clips, media server 105 may communicate that information to media generation engine 104 so media generation engine 104 can retrieve the data for the media clips from one or more storage locations in media database 103 . Media server 105 uses the input information to generate one or more media clips.
  • Media clip generation involves applying one or more processing algorithms to the input data. Typical processing involves merging/mixing, audio dubbing, inserting media clips and any other type of processing that takes one or more media clips and generates one or more new media clips based on context information.
  • Media server 105 may employ a highly optimized multi-threaded compressed media seamless splicing process in order to maximize the number of connections, network throughput and number of media clips that can be processed per media server 105 per unit time.
  • embodiments of the invention may employ a cache in order to further minimize the processing involved for repetitive-access applications, whereby each successive access avoids accessing media database 103 and the delays associated with accessing a database versus reading memory directly.
  • the cache may comprise a single concatenated media clip built out of the various static and insert clips, or may comprise the individual clips that are sent or constructed at real-time to produce the seamless output clip.
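  • A cache of this kind might, for instance, key finished clips on the application and its variable values so that repeated requests bypass the media database; the sketch below is an illustrative assumption, with build_clip standing in for the full generation path:

```python
# Illustrative cache of finished clips keyed by application id and variable
# values, so repeated requests avoid hitting the media database. None of these
# names come from the patent.
from functools import lru_cache

def build_clip(application_id: int, variables: dict) -> bytes:
    """Placeholder: fetch master/insert clips from the database and splice them."""
    return b""

@lru_cache(maxsize=4096)
def _cached(application_id: int, variable_items: tuple) -> bytes:
    return build_clip(application_id, dict(variable_items))

def get_clip(application_id: int, variables: dict) -> bytes:
    # Dicts are unhashable, so normalize the variables to a sorted tuple key.
    return _cached(application_id, tuple(sorted(variables.items())))
```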
  • media database 103 is typically a commercially available or freeware relational database management system (RDBMS).
  • Storage locations may also be any file system accessible locally or through a network or a memory cache.
  • Systems embodying the invention may comprise media production software components running on client 101 or any other client accessible by sender 100 , or on media generation engine 104 .
  • a media production system allows a user to take newly recorded or existing media clips, edit them, and prepare them for use with embodiments of the invention.
  • the production phase is disclosed below in further detail, and involves producing media clip properties, attributes and symbols to allow the multimedia generation engine, at a later stage, to combine a plurality of media clips to generate one or more output media clips.
  • a separate production system may be utilized independent of media generation engine 104. Regardless of where the production occurs, it allows a producer to create clips using real-life recordings or computer-generated media that include audio, video or any other electronic data format.
  • the system used for production allows users to generate master clips while saving insertion points, variable names for those insertion points and other attributes that associate the master clip with context information, and relationships between media clips.
  • FIG. 2 illustrates a method for generating personalized media in accordance with one or more embodiments of the invention.
  • the process starts at 200 .
  • the system obtains at least one master clip at 201 and obtains any associated insert clip(s) at 202 .
  • the system determines the points within the master clip(s) where the insert clip(s) are to be inserted.
  • the determination of the insert point(s) is accomplished by obtaining metadata detailing the location of the insert points or through any other method of describing the locations where the insert clips are to be inserted.
  • the master and insert clips are then seamlessly blended into a media clip in order to mask any transition points between the master and insert clips. This will be described more fully below.
  • the individual clips may be stored away for later access in a cache or database.
  • if the media clip is to be transmitted immediately, as determined at 205, then the media clip is distributed at 207 (see FIG. 3). If the media clip is being created for an asynchronous pickup, for example as used in a ring tone or ringback tone for a cell phone, then the media clip is saved at 206. The system then waits to start the process over at 200 when another media clip generation is requested.
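  • The FIG. 2 flow can be illustrated for the simple case of uncompressed WAV audio, where the insert-point metadata supplies the gap start times and every clip shares the same sample rate, width and channel count; the function names below are illustrative, not the patent's API:

```python
# Minimal sketch of the generation flow for uncompressed WAV audio: obtain the
# master clip and insert clips, read insert points from metadata, and splice.
import wave

def _read(path):
    """Return (frames, params) for a WAV file."""
    with wave.open(path, "rb") as w:
        return w.readframes(w.getnframes()), w.getparams()

def generate_media_clip(master_path, inserts, out_path):
    """inserts: list of (gap start time in seconds, insert clip path) from the metadata."""
    master, params = _read(master_path)
    frame_bytes = params.sampwidth * params.nchannels
    out, cursor = bytearray(), 0
    for start_s, insert_path in sorted(inserts):
        insert_audio, _ = _read(insert_path)           # assumed same rate/width/channels
        start = int(start_s * params.framerate) * frame_bytes
        out += master[cursor:start] + insert_audio     # master up to the gap, then the token
        cursor = start + len(insert_audio)             # skip the equally long gap in the master
    out += master[cursor:]                             # remainder of the master clip
    with wave.open(out_path, "wb") as w:
        w.setparams(params)                            # frame count is corrected on close
        w.writeframes(bytes(out))
```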
  • while the invention contemplates the use of many different interfaces (e.g., a web interface, email client, and/or any other type of device configured to execute playback of the media clip), there are some specific details and generalities associated with the use of each type of interface.
  • the web interface and/or email interface provides users with a way to access, through an interconnection fabric such as a computer network, one or more server sites.
  • the client and server system supports any type of network communication, including, but not limited to wireless networks, networking through telecommunications systems such as the phone system, optical networks and any other data transport mechanism that enables a client system to communicate with a server system.
  • the user interface also supports data streaming, as in the case of streaming multimedia data to a browser plug-in, a multimedia player, and/or any type of hardware device capable of playing multimedia data.
  • other embodiments of the invention may utilize web service interfaces, or may take advantage of peer-to-peer architectures for obtaining and splicing clips to one another and delivering them to one or a great number of users.
  • the user interface provides a mechanism for obtaining a unique identifier associated with each user that accesses the system. Any data item that uniquely identifies a user or device is referred to as a unique identifier.
  • the ANI and DNIS may be utilized to query media database 103 in FIG. 1 for a media clip unique to the desired sender and receiver. This, for example, may be utilized in order to play a certain piece of music or personalized message to a particular caller, but not to a different caller.
  • the overlay of an audible standard telephone ring may be performed in order for the sender to comprehend that the receiver's phone is actually ringing and that they are not on hold.
  • a serial number and/or a user name and password can act as a unique identifier and thereby provide access to the system while restricting unauthorized access.
  • the unique identifier is a cookie file containing user information (e.g., user name, age, and any other information about the user) or a URL or pointer to the appropriate user information. Once the system obtains the cookie information, that information is used for purposes of rendering a personalized multimedia file.
  • the system can utilize the information contained within the cookie file to determine which insert clip to associate with a master clip for purposes of rendering the media clip.
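  • For illustration, the cookie-driven selection described above might look like the following sketch, in which the cookie name and the clip-naming convention are assumptions:

```python
# Sketch: derive the insert clip from a browser cookie. The "user_info" cookie
# name and the file-naming convention are assumptions for illustration.
from http.cookies import SimpleCookie

def insert_clip_from_cookie(cookie_header: str, clip_dir: str = "clips") -> str:
    cookie = SimpleCookie()
    cookie.load(cookie_header)
    user_name = cookie["user_info"].value if "user_info" in cookie else "friend"
    # Map the (lower-cased) name onto a pre-recorded celebrity name clip.
    return f"{clip_dir}/name_{user_name.lower()}.wav"

# Example: insert_clip_from_cookie("user_info=Joseph") -> "clips/name_joseph.wav"
```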
  • embodiments of the invention are configured to selectively determine the content of the multimedia data based on user information such as a user type, and user preferences.
  • the system may obtain master clips, insert clips, and/or other multimedia clips from a variety of locations.
  • locations include database storage systems, data files, network locations, hard drives, optical storage devices and any medium capable of storing data including but not limited to network resources comprising web services and peer-to-peer networks.
  • the storage location is a relational database system.
  • a database system may hold the master clips and/or insert clips used to generate the media clips and/or a variety of other data or metadata associated with each media clip.
  • the data associated with the media clip allows for categorizing, classifying and searching media clips based on attributes.
  • metadata further comprises information about the clip including insert points, variable names at insert points, durations, and other items.
  • Database systems may be configured to index data in the database for purposes of expediting the process of searching for specific information in the database.
  • the database may comprise multiple mirrors to enable the system to scale up to handle a large number of concurrent users.
  • FIG. 3 illustrates a method for distributing personalized media in accordance with one or more embodiments of the invention starting at 300 .
  • a media clip is obtained at 301 from media database 103 as shown in FIG. 1. If a media clip requires dynamic context data to be inserted before transmitting, as determined at 302, then the context clip is obtained at 303. The insertion points for the context clip or clips is/are determined at 304.
  • the context clip is seamlessly blended with the input media clip, yielding a media clip comprising the context information at 305. If the media clip comprises no context information as determined at 302, or if context information has been inserted, then the media clip is formatted for the receiving playback device at 306. This may comprise formatting the media clip for a given audio format, for example.
  • the media clip is then transmitted to the client at 307.
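  • A minimal sketch of this FIG. 3 flow is shown below; the transcode and transmit helpers are placeholders for device-specific formatting and delivery, and none of the names come from the patent:

```python
# Minimal sketch of the distribution flow (steps 301-307 above).
def transcode(clip: bytes, audio_format: str) -> bytes:
    """Placeholder: re-encode the clip for the target device's audio format."""
    return clip

def transmit(clip: bytes, address: str) -> None:
    """Placeholder: deliver the clip over whatever transport the device uses."""
    ...

def distribute(media_clip: bytes, device: dict,
               context_clip: bytes | None = None, insert_point: int = 0) -> None:
    if context_clip is not None:                          # 302-304: dynamic context required?
        media_clip = (media_clip[:insert_point]           # 305: blend the context clip in
                      + context_clip
                      + media_clip[insert_point:])
    formatted = transcode(media_clip, device.get("format", "mp3"))  # 306: device formatting
    transmit(formatted, device["address"])                # 307: send to the client
```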
  • Context information may be obtained from any number of sources.
  • multimedia attributes may be obtained from a database system, time from a clock system, events information from a calendaring system, geographical information from a global positioning system and any other system capable of providing context information to embodiments of the invention.
  • Context information may combine attribute information and rule information to determine a means and time for initiating playback.
  • an event originating from a calendaring system may specify which delivery means to use for delivering the output media clip depending on time of the day, type of the event, events preceding (or succeeding) the event, or location of the user. If the user is online, playback may be via the web interface, or if the user is using email playback may be in the form of an email.
  • the playback may be redirected to a cellular phone.
  • the system may use other context attributes to determine exclusion rules between media clips. For example, insert media clips designed for use in certain contexts such as happy occasions, may only be used in some context categories and not others.
  • embodiments of the invention allow for providing an engine that may automatically handle tasks on behalf of persons.
  • FIG. 4 illustrates a block diagram representing the elements of one or more media clips configured in accordance with one or more embodiments of the invention.
  • the horizontal axis is in increasing time order from left to right.
  • Master clip 410 contains any type of multimedia data including, but not limited to, audio and/or video.
  • One or more master clips can be merged together to create a media clip ready for playback.
  • Insert clip 420 can also contain any type of data (e.g., audio, video, etc.).
  • the system may combine two or more media clips to form either a master clip or insert clip so long as the clips have at least one property in common. For example, an audio clip may be merged with a video clip if the audio track included with the video clip has the same characteristics as the audio clip to be inserted.
  • the clips may be normalized before combining. Clips with different lengths may be front- or back-end truncated, or cross-sampled faster or slower, in order to fit the clip within the desired slot.
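  • Fitting a token into a fixed-length slot, as just described, could be sketched as follows for raw PCM samples (back-end truncation, silence padding, or a simple nearest-neighbour resample); this is illustrative only:

```python
# Sketch of fitting an insert clip into a fixed-length slot: truncate/pad, or
# resample (speed up / slow down) so the clip occupies the whole slot.
def fit_to_slot(samples: list[int], slot_len: int, resample: bool = True) -> list[int]:
    if len(samples) == slot_len:
        return samples
    if not resample:
        # Back-end truncate, or pad with silence, to exactly the slot length.
        return samples[:slot_len] + [0] * max(0, slot_len - len(samples))
    # Cross-sample faster or slower so the clip fills the slot exactly.
    step = len(samples) / slot_len
    return [samples[int(i * step)] for i in range(slot_len)]
```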
  • the master clip may contain metadata stating that the time slot to fit an insert clip into is not fixed, meaning that the clips can simply be concatenated one after the other since there may not be background sound information which would cause a non-seamless splice to occur. This can also be thought of as appending master clips back to back, for example if no fixed time gap was left in a given master clip and another clip such as an insert clip is to be appended before yet another master clip.
  • the independent clips are seamlessly spliced in order to produce an output clip that is perceived as a single recorded clip.
  • the location where the system interleaves insert clip 420 with one or more master clips 410 is marked by a start and end point, or start point and duration.
  • the insert clip is recorded to use the entire duration between the start and end point, thereby allowing the insert clip to sound or appear seamlessly integrated with the master clip.
  • Obtaining a master clip or insert clip may involve recording a live performance (e.g., a commercial or an artistic performance by a band), or capturing computer synthesized sounds.
  • a producer identifies the clips that are to become master clips and edits the clips or the voice track of a clip or clips in order to leave gaps for dropping one or more insert clips.
  • the producer may also input attributes to describe the sounds or the images in the media clips.
  • Some examples of data that may serve as attributes are text keywords and key phrases, a sound clip preview, an image preview or any other data format that may characterize a media clip.
  • the producer also determines among all available media clips those that are designed to be insert clips.
  • Insert clips are fashioned in embodiments of the invention to be inserted or mixed at one or more locations in one or more media clips (e.g., master clips).
  • insert clips are artfully recorded to fill a predetermined duration of time. If a master clip leaves a gap of 3 seconds to place a person's name, the insert clip is recorded to fill up the entire 3 seconds.
  • the underlying music track seamlessly integrates the master clip together with the insert clip.
  • An insert clip may itself be a master clip, if the insert clip is designed for mixing with other media clips.
  • the system also provides a mechanism for associating insert clips with keywords, key phrases, sound previews, image previews and any other data format that allows the system to identify, classify, sort or otherwise manipulate the insert clip for purposes of data management; this information is commonly known as metadata.
  • the producer marks the clip with insertion points.
  • the invention contemplates the use of various techniques for marking insertion points.
  • the system may, for instance, embed a signal having an identifiable pattern to mark a particular location in a master clip or other type of media clip. The signal is checked for when the system is looking for a location to place an insert clip.
  • Other approaches involve defining location information and storing the location information along with the media clips (e.g., in a database system) in the form of metadata associated with the clip.
  • the system may utilize a plurality of master clips that each begin and/or end at the point where an insert clip is to be placed.
  • when the master clips are merged together with one or more appropriate insert clips, the result is a seamless media clip ready for playback.
  • a song or some other type of recorded information is split into a set of compressed or uncompressed sequential files (e.g., WAV, AVI, MP3, OGG, etc.), certain files are identified as insert files, the voice track is removed from the insert files, and an insert clip is recorded over the insert file. This allows for the appearance of an original recording since the background music continues to play along while a vocally personalized or context associated phrase is inserted into the media clip.
  • the producer can create the insert clip by simply adding the appropriate voice data to the clip. In either case the master clips and insert clips are then merged together to create a finalized media clip.
  • the system may generate the media clip on the fly by integrating the appropriate master clips and insert clips together, or it may retrieve a previously created media clip from the database.
  • the producer of a media clip may define mixing and insertion properties.
  • the system may use such properties to define the way an insert clip is merged together with one or more master clips. For instance, properties may enable the system to know when to fade the master clip signal to allow for seamless integration of an insert clip and slowly return to normal after the insert clip completes.
  • the markings indicating the split and merge locations may be embedded codes or metadata stored separate from the clip.
  • the applications generated with embodiments of the invention reflect the flow of natural language. This is accomplished when a creator of the application writes at least one “generic” filler for every slot in the application and/or provides an alphabetic set of “generic” fillers for slots with highly variable information (e.g. name) and accounts for phonemic blending that occurs across closely enunciated phrases. If the writer has determined that certain phonemic blends will produce a “dirty” post-produced result, and has decided to include the content preceding and following a slot, the studio guide should incorporate the preceding and following content in every read of the variable. A “dirty” blend occurs whenever a phoneme (vowel or consonant utterance) crosses into the following word in natural language.
  • FIG. 5 illustrates a flow chart showing the interactions of the system components for a telephonic ring tone and/or ringback tone embodiments of the invention.
  • a user selects personalized content at 500; this is shown in the block as a client computer on the left, associated with a user visiting an operator portal such as a telephone company portal and selecting personalized content.
  • the operator portal communicates with the media server at 501 to determine what personalized applications are available and to obtain the associated variable names and to supply or receive any other information required, such as for example the bit rates desired for delivery to the specific target telephonic client device such as a particular cell phone.
  • the user configures the ring tone or ringback tone at 502; this can involve selecting a particular celebrity voice to use for the personalized clip and filling out forms to account for the required insert clips, for example recipient name, caller name or any other variables required for generation of the resulting clip.
  • the user interacts with the operator portal in this step.
  • the user previews the clip at 503 when the operator portal redirects the user to a preview window to allow the user to access the clip from the media server.
  • the user confirms the transaction at 504 via the operator portal.
  • the operator portal retrieves the finished clip or clips and uses the clip for a ring tone, ringback tone or answering message as appropriate.
  • In an example of an XML embodiment of an application, a single document may be utilized to describe the structure of the audio file, clip or clips to be delivered.
  • This value is passed in by a requestor and processed by the audio server to determine which application to serve.
  • the integer value used in the ID is system-unique.
  • Bitrate is an integer value.
  • <samplerate> The default output sample rate for the application.
  • Sample rate is in hertz.
  • <channels> The default output channel count.
  • the audio server currently supports mono and stereo output.
  • <segments> The number of pieces of audio used to assemble the file. Example: <segments count="5">. Attributes: count.
  • This element is a parent element of the ⁇ audio> tag described below, and its attribute “count” corresponds to the number of ⁇ audio> tags it contains.
  • <audio> This element refers either to an audio file, or to a directory containing multiple audio files.
  • the id attribute is the file or directory name, and omits file extensions (i.e. “.wav”) and trailing slashes.
  • HTTP element override: the enctype parameter overrides all of the default elements (<mime-type>, <bitrate>) for the application. Example: enctype pcm-8-8-1.
  • this override may be used on an application whose deployed base audio is 16-bit, 22 kHz stereo, to deliver audio into a VoiceXML application that requires 8-bit, 8 kHz audio.
  • This override causes the audio server to return additional, protocol-appropriate Content-Disposition headers, using the filename specified by the client. This is useful for tagging returned content requested via cURL and other popular HTTP packages.
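  • For illustration, such an application document could be assembled programmatically as in the sketch below, which uses only the elements named above; the root tag and the overall nesting are assumptions:

```python
# Sketch of building an application document with the elements described above.
import xml.etree.ElementTree as ET

def build_application(app_id: int, bitrate: int, samplerate: int,
                      channels: int, audio_ids: list[str]) -> str:
    app = ET.Element("application", id=str(app_id))          # root tag is assumed
    ET.SubElement(app, "bitrate").text = str(bitrate)
    ET.SubElement(app, "samplerate").text = str(samplerate)  # in hertz
    ET.SubElement(app, "channels").text = str(channels)      # mono = 1, stereo = 2
    segments = ET.SubElement(app, "segments", count=str(len(audio_ids)))
    for audio_id in audio_ids:
        # id is a file or directory name without extension or trailing slash
        ET.SubElement(segments, "audio", id=audio_id)
    return ET.tostring(app, encoding="unicode")

# Example: a five-segment clip alternating master pieces and insert slots.
print(build_application(42, 128, 22050, 2,
                        ["master_1", "names", "master_2", "relatives", "master_3"]))
```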
  • a media server or media generation engine may comprise many different types of hardware.
  • An example of the type of hardware configuration may comprise Dell PowerEdge 2400 servers each with dual Pentium III Xeon processors with 512K L2 cache. Each server may be configured with 1 GB of main memory and 42 GB (6 × 7 GB) storage configured with software RAID 0+1. Alternatively, the system may be deployed on higher-density blade servers. Logically, systems that employ MPEG 3 encoding gain substantial performance benefits from faster processor speeds. Embodiments using seamless splicing of compressed formats may serve higher numbers of users since the processing requirements of such implementations are significantly lower. This is due to the fact that encoding the output data does not have to occur on the entire output media clip, as it does when raw insert clips are added to a raw master clip.
  • Some compressed formats allow frames to be inserted in the middle of other frames without altering portions of the preceding or succeeding frames.
  • These compression formats can be used in order to pre-encode master clips and pre-encode insert clips before splicing them together. This optimization can yield a two order of magnitude increase in numbers of users serviced per second versus a non-cached raw master and raw insert clip splice methodology and subsequent compression and network transmission.
  • Embodiments of the invention utilizing scalable architectures may transcode and/or compress the media clips resulting in a great reduction in network load offered by such compression.
  • the system may perform WAV to MPEG 3 transcoding using LAME or any other encoder capable of compressing data into formats required for output by embodiments of the invention. While this scheme dramatically increases audio quality and/or reduces network demand by a large ratio (10:1), transcoding and compression place a heavy load on the media generation engine, or on the media server in the case of context clip blending.
  • Another embodiment of the invention utilizes an encoder with settings designed to allow for seamless splicing of compressed media. This eliminates the need for a layer of compression engines dedicated to compressing the media clips and creates a system that is approximately 300 times faster than a brute force WAVE to MPEG-3 media clip cache-less personalization system.
  • Seamlessly splicing media clips may be performed for certain media types.
  • Raw data types such as WAV, AIFF and AU format files are ordered in time without borrowing bits from preceding or succeeding frames and therefore may be sliced out and added in with impunity.
  • Highly compressed formats may or may not allow for this type of manipulation of individual frames of data, since highly compressed formats generally place data that should belong in a hard-to-compress frame into easy-to-compress frames representing simple waveforms. This interlacing of data makes the frames dependent upon one another.
  • MPEG-3 allows for compression with slight degradation of the high-end frequency spectrum by encoding frames to hold information only for the current frame. This is achieved by setting the encoder to abandon the use of the bit reservoir, thereby degrading the frequency response slightly.
  • variable bit rate encoding with overlapping encodes can also achieve frame independence, but the recordings must overlap in time. Since the gain in frequency response is minimal and the calculations and bit manipulations are more complex, embodiments of the invention using constant bit rate encoding without the bit reservoir may be used in situations where maximum sound quality is not required, and situations where maximum sound quality is required may use variable bit rate encoding with the higher-complexity bit manipulation algorithms involved.
  • the LAME encoder software produces various blank spots on the front and end of encoded clips due to algorithms used in order to decode the clips.
  • Certain encoders use MDCT/filterbank routines functionally similar to decoder routines and leave 528 sample delays at the front of encoded files.
  • seamless-splice media clips may be created by clipping the first granule (576 bits) of the insert clip encoded using LAME software, which contains MDCT coefficients, and by eliminating the ID3 metadata from the file and the last 288 bits at the end of the insert clip.
  • The resulting media clip contains no front or back-end artifacts, metadata or data dependencies to hinder its independent insertion into a master clip.
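  • By way of illustration only, a minimal Python sketch of this kind of clean-up follows; the ID3v2 header layout is standard, but treating the leading and trailing regions as byte-aligned amounts, and the default sizes used here, are simplifying assumptions made purely for illustration.
    def strip_id3v2(data: bytes) -> bytes:
        """Remove a leading ID3v2 tag, if present. The tag header is ten
        bytes: 'ID3', version, flags, and a four-byte syncsafe length."""
        if data[:3] == b"ID3" and len(data) >= 10:
            size = 0
            for b in data[6:10]:            # syncsafe integer, 7 bits per byte
                size = (size << 7) | (b & 0x7F)
            return data[10 + size:]
        return data

    def trim_insert_clip(path: str, head: int = 72, tail: int = 36) -> bytes:
        """Drop metadata plus a leading and trailing region from an encoded
        insert clip so it can be spliced into a master clip (head and tail
        default to 576 and 288 bits expressed as bytes, purely for
        illustration)."""
        with open(path, "rb") as f:
            data = strip_id3v2(f.read())
        return data[head:len(data) - tail]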
  • A user utilizes the system embodying the invention to send customized messages (e.g., an invitation, advertisement, reminder, etc.) to one or more other users (e.g., recipients).
  • A user may connect to a server and input a list of other users who are to receive the customized message.
  • The sending user may select a master clip for distribution, and the system assembles a multimedia clip for distribution using the list of user information to identify an appropriate insert clip(s) to merge with the master clip.
  • The system is also capable of retrieving context information to determine the best communication path to reach the recipient and/or the recipient's availability.
  • The system may obtain other context information such as availability information and personal information.
  • The system utilizes the context information in several ways. For example, the system may send messages at different times depending on the distance between the residence location of each recipient and the location of the meeting. The system may also send the message using different transport mechanisms depending upon the whereabouts of the recipient. If the user is currently using the Internet, the system may elect to email the message. Otherwise, the system may opt to transmit an audio message to a voicemail system or to contact the user by making a cellular phone call.
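  • By way of illustration only, a simple Python sketch of such context-driven delivery decisions follows; the recipient fields and the lead time used are hypothetical.
    from datetime import datetime, timedelta

    def choose_transport(recipient: dict) -> str:
        """Pick a delivery path from simple context rules."""
        if recipient.get("online"):
            return "email"
        if recipient.get("cell_number"):
            return "cell_call"
        return "voicemail"

    def send_time(meeting_time: datetime, travel_minutes: int) -> datetime:
        """Send the message earlier to recipients who live farther away."""
        return meeting_time - timedelta(minutes=travel_minutes + 30)

    recipient = {"online": False, "cell_number": "+1-555-0100"}
    print(choose_transport(recipient))
    print(send_time(datetime(2005, 1, 24, 9, 0), travel_minutes=45))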
  • The system retrieves recipient information (e.g., first and last names, his/her title, etc.) and assembles a multimedia clip appropriate for each user.
  • The system may, for instance, alter the language, gender, tone, or any other modifiable aspect of the voice track depending upon the characteristics of the user.
  • The system may also select an appropriate mechanism and format for the multimedia clip and thereby produce a multimedia clip specific to each user.
  • The personalization variables or insert variables are mapped to existing insert clips.
  • For example, an application may comprise a name variable and enable a user to type in a name for the person for whom the message is to be personalized.
  • Non-existent names are mapped to the closest possible name.
  • For example, a personalized message for the user name "Joseph" may map to the insert clip for "Joe".
  • Decency standards may also be applied: when a user types in (or says) a word that is inappropriate for the insert clip, for example a curse word, it may be mapped to an acceptable word that is not offensive.
  • Using 1300 names covers 80% of the population in the United States, so mapping the rest of the names to a generic form of address such as "sir" or "madame" when a name is not found yields 80% coverage for a small processing price.
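  • By way of illustration only, the following Python sketch shows one way such a mapping might be structured; the name tables, file names and blocked-word list are placeholders.
    COMMON_NAME_CLIPS = {"joe": "joe.mp3", "mary": "mary.mp3"}   # ~1300 entries in practice
    NICKNAMES = {"joseph": "joe", "margaret": "mary"}            # map to the closest recorded name
    BLOCKED_WORDS = {"badword"}                                  # decency standard

    def resolve_insert_clip(requested_name: str, generic: str = "sir") -> str:
        """Map typed-in text to an existing insert clip, substituting an
        acceptable word for offensive input and falling back to a generic
        form of address when no recording exists."""
        key = requested_name.strip().lower()
        if key in BLOCKED_WORDS:
            key = generic
        key = NICKNAMES.get(key, key)            # "Joseph" -> "Joe"
        return COMMON_NAME_CLIPS.get(key, f"{generic}.mp3")

    print(resolve_insert_clip("Joseph"))             # joe.mp3
    print(resolve_insert_clip("Xzyqw", "madame"))    # madame.mp3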
  • Embodiments of the invention may be utilized by cell phone companies to obtain ring tones and ringback tones along with other personalized media, such as button vocalizations in a celebrity voice for example.
  • Although generation of personalized media may occur when a user purchases a personalized ringback tone, for example, the generation of the media clip for a given incoming phone number and a given caller ID may occur at any time before the clip is actually used.
  • The revenue generated from the sale of the personalized media clip comes from the telephone company, which may charge its clients a different fee, or a fee for each use of the generated media clip.

Abstract

Personalized media is generated by obtaining a master clip having predefined insert points, obtaining an insert clip, and seamlessly merging the insert clip into the master clip to generate a personalized media clip with undetectable transitions between the spliced clips. An insert clip is utilized for purposes of adding variables such as a name, place, time, gender, product name or any other desirable information to a master clip. Dynamic context data may be added and transmitted to a client playback device. This distribution process may occur in real-time or may be deferred until the clip is requested. The playback device may use the clip as a ring tone, ringback tone, voice message or for any other purpose. Although the contents of the master clip and/or the insert clip may use any voice, including the originator's voice, celebrity voices or the voices of celebrity impersonators may be utilized. The master clip and insert clip may be seamlessly merged using uncompressed or compressed data in given formats.

Description

  • This application is a continuation of U.S. patent application Ser. No. 11/040,166, filed Jan. 20, 2005, which is a continuation-in-part of U.S. patent application Ser. No. 10/605,527, filed Oct. 6, 2003, which claims the benefit of U.S. Provisional Application Ser. No. 60/416,127, filed Oct. 4, 2002, the specifications of all of which are hereby incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • Embodiments of the invention pertain to the field of computer systems. More particularly, the invention is directed to a system and method for generating and distributing personalized media using computers.
  • 2. Description of the Related Art
  • Modern systems generate and utilize multimedia data in a plurality of different ways. For example, users can currently communicate information to and hear responses from systems that generate audio data and transmit that data back to the user over the telephone. Typically, existing systems utilize a mapping between one form of data (e.g. numerical information or text data) and a set of audio files to generate an audio file for playback. One common scenario where this occurs is when calling a bank to check bank account balances or transfer money. The system at the bank may, for example, obtain a user's account information via touchtone input and audibly play back that user's account information for purposes of confirmation. Existing systems for building and distributing such audio files use the input to map to a set of prerecorded audio tracks and assemble a message for playback. The end result is oftentimes an awkward-sounding message that fails to seamlessly integrate the prerecorded audio tracks. In addition to obvious transitions between clips, gaps of silence and intonation differences make this type of personalized media disagreeable to the human ear.
  • Existing solutions do not provide a way to generate an audio file that seamlessly integrates a plurality of audio files in a way that makes the generated file sound like an original recording with undetectable transitions, rather than a computer generated message. Moreover, current systems do not personalize the content of the generated audio file based on user information automatically obtained from the device or software program utilized to access the system and/or context information associated with the user. For example, current systems do not provide a mechanism for automatically generating and disseminating personalized ring tones or ringback tones in real-time or in an asynchronous manner.
  • As a result of these limitations and others there is a need for a system and method for generating and distributing personalized media that is capable of seamlessly integrating media clips for real-time and asynchronous distribution.
  • BRIEF SUMMARY OF THE INVENTION
  • One or more embodiments of the invention are directed to a system and method for generating and distributing personalized media. Personalized media is generated via one or more embodiments of the invention by selecting one or more master clips having predefined gaps, obtaining insert data (e.g., an insert clip), seamlessly merging the insert data into the selected master clip to generate a media clip with undetectable transitions between spliced clips. Personalized media is distributed via one or more embodiments of the invention by optionally adding dynamic context data to the media clip, formatting the media clip for a given client device and transmitting the resulting media clip to a client device. This distribution process may be in real-time or delayed until the media clip is requested at a later time. Embodiments of the invention may utilize any computing environment from single processor computing systems to highly optimized multi-threaded server processes comprising seamless splicing of compressed media or any other architecture capable of achieving the desired scalability.
  • An insert clip may contain any type of data. In most instances, however, the insert clip is utilized for purposes of adding variables such as a name, place, time, gender, product name or any other desirable information to a master clip. The integration between the master clip and the insert clip is seamless meaning that there are no human detectable transitions between insertion points in the media clip. Regardless of the size of the insert clip the finished media clip lacks any noticeable gaps or intonation changes. Even though the media clip is generated using a plurality of different clips, the media clip sounds as if it was originally recorded in one take. Flash animation or other types of multimedia data such as video can be added to the media clip to enhance the user experience during playback. An insert clip comprising dynamic context data may include for example a time, date, location, temperature or any other information not available until the time of delivery.
  • Although the contents of the master clip and/or the insert clip may use any voice including the originator's voice, on many occasions celebrity voices or the voices of celebrity impersonators are utilized. The master clip, for instance, might be recorded by the celebrity and the insert clip recorded using a voice over artist. Thus, embodiments of the invention provide a mechanism for generating and distributing personalized media clips using what sounds like and/or is the voice of a celebrity. For instance, once the system merges one or more master clips together with one or more insert clips and thereby generates the media clip, the system can provide the media clip to a device and/or program for real-time playback or asynchronous pickup for later playback. In addition, embodiments of the invention may use computer synthesized and/or TTS (text to speech) software of varying complexity in order to simulate voices.
  • Playback of the media clip initiates at a number of different types of devices and can be triggered by a multitude of different events. Some examples of the types of playback devices (also known herein as destination clients) used in accordance with one or more embodiments of the invention include (but are not limited to) a computational device configured to access a network (e.g., the World Wide Web (WWW)) via a browser, an email client, or some other network interface. A cell phone or any other type of portable or non-portable device (satellite, digital cable, and/or satellite radio) configured to output media clips (e.g., audio, video, etc.) may also function as a playback device. A cell phone may obtain a personalized ring tone, personalized ringback tone, or personalized media clip for use with regard to any cell phone function via a cell gateway server. Alternatively, a cell phone itself may create a media clip locally for playing back personalized media in one or more embodiments of the invention during ring, ringback or at any other time. Any combination of generation of personalized media on a telephone server or locally is in keeping with the spirit of the invention.
  • Other types of playback devices may comprise for example a gasoline pump identifying a user via an RFID based device, such as SpeedPass®, a cash register comprising a credit card reader for identifying a customer, a slot machine or a hotel door comprising a hotel room key reader for identifying a guest, or any other device capable of identifying a user and playing back a media clip.
  • DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an architectural view of an embodiment of the invention for generating and distributing personalized media.
  • FIG. 2 illustrates a method for generating personalized media in accordance with one or more embodiments of the invention.
  • FIG. 3 illustrates a method for distributing personalized media in accordance with one or more embodiments of the invention.
  • FIG. 4 illustrates a block diagram representing the elements of one or more media clips configured in accordance with one or more embodiments of the invention.
  • FIG. 5 illustrates a flow chart showing the interactions of the system components for a telephonic ring tone and/or ringback tone embodiments of the invention.
  • FIG. 6 illustrates an application comprising static clips that make up the master clip, and insert clips, otherwise known as tokens, for insertion into slots or insert points of a given duration within the master clip.
  • DETAILED DESCRIPTION OF THE INVENTION
  • In the following description, numerous specific details are set forth to provide a more thorough description of embodiments of the invention. It will be apparent, however, to one skilled in the art, that the invention may be practiced without these specific details. In other instances, well known features have not been described in detail so as not to obscure the invention.
  • One or more embodiments of the invention are directed to a system and method for generating and distributing personalized media. Personalized media is generated via one or more embodiments of the invention by selecting one or more static clips comprising a master clip having predefined gaps or slots, obtaining insert data (e.g., an insert clip), seamlessly merging the insert data into the selected master clip to generate a media clip with undetectable transitions between spliced clips. Personalized media is distributed via one or more embodiments of the invention by optionally adding dynamic context data to the media clip, formatting the media clip for a given client device and transmitting the resulting media clip to a client device. This distribution process may be in real-time or delayed until the media clip is requested at a later time. Embodiments of the invention may utilize any computing environment from single processor computing systems to highly optimized multi-threaded server processes comprising seamless splicing of compressed media or any other architecture capable of achieving the desired scalability.
  • FIG. 6 shows an "application", also known as a template, specifying static segments that remain the same from application to application, and slots or insert points with fixed or variable durations into which tokens or insert clips are inserted within the master clip.
  • An insert clip may contain any type of data. In most instances, however, the insert clip is utilized for purposes of adding variables such as a name, place, time, gender, product name or any other desirable information to a master clip. The integration between the master clip and the insert clip is seamless meaning that there are no human detectable transitions between insertion points in the media clip. Regardless of the size of the insert clip the finished media clip lacks any noticeable gaps or intonation changes. Even though the media clip is generated using a plurality of different clips, the media clip sounds as if it was originally recorded in one take. Flash animation or other types of multimedia data such as video can be added to the media clip to enhance the user experience during playback. An insert clip comprising dynamic context data may include for example a time, date, location, temperature or any other information not available until the time of delivery.
  • Although the contents of the master clip and/or the insert clip may use any voice including the originator's voice, on many occasions celebrity voices or the voices of celebrity impersonators are utilized. The master clip, for instance, might be recorded by the celebrity and the insert clip recorded using a voice over artist. Thus, embodiments of the invention provide a mechanism for generating and distributing personalized media clips using what sounds like and/or is the voice of a celebrity. For instance, once the system merges one or more master clips together with one or more insert clips and thereby generates the media clip, the system can provide the media clip to a device and/or program for real-time playback or asynchronous pickup for later playback. In addition, embodiments of the invention may use computer synthesized and/or TTS (text to speech) software of varying complexity in order to simulate voices.
  • A cell phone or any other type of portable or non-portable device (satellite, digital cable, and/or satellite radio) configured to output media clips (e.g., audio, video, etc.) may function as a playback device. Embodiments of the invention may generate and distribute personalized ring tones and ringback tones based on caller identification and dialed called party number. An example media or ring clip could utilize a celebrity voice to announce "[user name] your [relative type] is calling", where [user name] is the user's name spoken in the voice of a celebrity and [relative type] is selected from the list of {brother, mother, father, son, etc.}. In addition, ringback tones, comprising the sound played to the calling user when waiting for another user to answer the phone, may be personalized based on the calling party, i.e., using the caller identification number. The telephone server playing the ringback tone may query for the personalized clip to play back utilizing the ANI and DNIS, i.e., caller identification and dialed called party number respectively.
  • In telephonic related embodiments of the invention, the cell gateway (also known as a portal operator) itself or alternatively any other telephone computer system or server coupled with the cell gateway may generate the resulting ring tone that plays on the receiving cell phone when it is ringing, or the resulting ringback tone that plays on the initiating cell phone when waiting for another user to answer. Alternatively, generating the ring tones, ringback tones and messages with personalized variables locally on the cell phone itself is in keeping with the spirit of the invention.
  • An embodiment of the invention allows for an RFID based device, such as SpeedPass® to provide a unique identification to a RFID reader which in turn provides for a personalized message to be played back by a gas pump electronic interface unit, which in this case would be the playback device. In this embodiment of the invention, the gas station local server, or company main server may contain the personalized variable information. When the unique identification is presented to either server, the resulting output media clip may be constructed on either server and played on the gas pump electronic interface unit. Bluetooth devices in the vehicle or coupled with the user may also play back the output media clip if the gas pump electronic interface unit is configured with WiFi or other wireless technologies configured to request media output.
  • Another embodiment of the invention enables a playback device such as a kiosk for purchasing plane tickets or groceries to identify a user and play personalized media messages to that user. Additional examples of playback devices used in embodiments of the invention include loyalty card readers, ATM machines, and GPS devices in planes and cars. Hotel electronic doors are another example playback device, where the insertion of an electronic key into the guest's door plays a message such as "Welcome [title] [user surname]" with title and user surname set to "Ms." and "Smith" respectively in this example. Playback devices may connect to embodiments of the invention comprising computational resources; alternatively, if the playback device itself has enough computational power and storage holding the personalized information, or can obtain the personalized information from an identifier associated with the user, the playback device may itself act as an embodiment of the invention in terms of constructing and playing the personalized media clip. In this example, the hotel electronic door may comprise a network connection to the hotel's computing system. This connection may be wireless or wired. The hotel computing system in this example may detect the electronic key or credit card-like magnetic key and determine the identification of the hotel guest. The personalized message comprising the "Welcome [title] [user surname]" media clip would then be generated on the hotel's computing system, sent to the electronic door and played on a small speaker built into the electronic door.
  • Another playback device may be a credit card reader configured to play back a personalized message to a shopper after the user identifies themselves with the credit card. For example, media output in this case may include a Flash animation with the user's name and an audio track with the phrase, “Welcome [user name], your current purchase is missing your [time period] buy of [product name]”, where [user name], [time period] and [product name] are insert clips that seamlessly combine with the master clip to create the output media clip.
  • Another example playback device may be a slot machine capable of identifying the user via credit card, RFID or hotel room key. The slot machine could play a message such as “[User name], you just won [winning amount] dollars!”. In this example, the slot machine may be networked to a server comprising the computational power and requisite personalization clips to create the output media clip or the slot machine itself may obtain an identifier associated with the user and construct the media clip itself.
  • Another example playback device may be a digital cable set-top box where personalization occurs on a cable system server and is sent to the IP address of the cable box or uses the subscriber ID in order to encode a message on a data channel.
  • Another example playback device may be a toy which may be personalized at the factory at on-line purchase time or at home through a network connection or through a wireless interface to a local computer with a network connection or configured to run as an embodiment of the invention. In the case of internet shopping, the purchaser may choose the personalization clips that are to be inserted into the toy before shipping. For example, this would allow the toy to sound like a famous cartoon character and would arrive at the child preloaded. With inexpensive network devices available, network capable toys would be able to be dynamically loaded with personalized output media clips. Toys containing processing units would be able to switch output media clips based on accelerometers that could be used in order to determine if the older or younger sibling was playing with the toy. For example, the toy may cry out, “[user name] be nice to me”, where [user name] would be the rougher of the two children in this example. Context information may be used in this embodiment of the invention as set by the parent. Encryption may be utilized within the media clip holding portion of the device in order to prevent hackers from creating toys with unwanted sounds, words or gestures.
  • In at least one embodiment of the invention, the time at which playback initiates depends upon the context of the device. Displaying a certain website, reading a particular email, calling a particular person, or being in a certain location are some examples of the different contexts that might trigger playback. These non-personal events or values may cause branching in determining what clips to splice together for final playback. For instance, a user of the system might initiate playback by visiting a certain web page (or some other type of online document or program) where the user will hear a personalized greeting from a celebrity. If, for example, the user visits an online bookstore, that user might receive a personal greeting from one of the user's favorite authors who then proceeds to promote his newest novel. If the context information associated with the time of day, for example, indicates that a different master clip should be played, i.e., shorter clips from the author in the morning than at night, then embodiments of the invention may take branching actions based on this context information. Other examples include personalized messages via email, a cell phone or some other playback device. In addition, a timer function or calendar function may initiate a media clip transmission. Another example context function producing an asynchronous initiation of a media clip without user intervention may include a location context whereby a GPS receiver in a phone or car initiates a media message based on location. Any non-personalized information or information source may be used as a context source. HTTP is a stateless protocol and connections are generated when needed by a requesting device; therefore, devices accessing embodiments of the invention over this protocol must employ different means by which to recognize asynchronous notification, such as polling or maintaining an open connection over a separate communications protocol.
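  • By way of illustration only, a minimal Python polling loop of the kind a destination client might run is sketched below; the URL and polling interval are hypothetical.
    import time
    import urllib.error
    import urllib.request

    def poll_for_media(url: str, interval_seconds: int = 30) -> bytes:
        """Poll the media server until a newly generated clip is available,
        since plain HTTP offers no server-initiated notification."""
        while True:
            try:
                with urllib.request.urlopen(url) as resp:
                    if resp.status == 200:
                        return resp.read()          # a new clip is ready
            except urllib.error.URLError:
                pass                                # nothing waiting yet
            time.sleep(interval_seconds)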
  • Other embodiments of the invention would, for example, allow a manager to notify all members of his or her team in a personalized manner that there was a meeting on Monday, saving many phone messages. The master clip in this example could be recorded and saved on a cell phone, with each person's name recorded on the cell phone as well. Embodiments of the invention may contain software interfaces allowing the user to in effect produce the master clip by holding a given button when recording the master clip and asserting another button when recording each variable insert clip. Alternatively, the user could simply save bulk personalization messages and send them en masse when needed, as in the case of staff meetings. Embodiments of the invention may alternatively operate without manager intervention, whereby the group to be invited to the staff meeting is contained within a server and a calendar function on a management server sends personalized media clips to the attendees a predetermined amount of time before the meeting.
  • If the media clip is distributed via the WWW, the media clip may be generated and automatically transmitted when the user visits a particular web page. The invention contemplates the use of a variety of different techniques for dynamically generating media clips. In one embodiment, the system obtains user information from a cookie file to instantaneously render a personalized multimedia file. In other instances user data is already known by the system or obtained and confirmed via a log-in process.
  • If the media clip is to be distributed via electronic mail, cellular telephone, or some other telecommunication mechanism, embodiments of the invention may utilize a database of user information to assemble the media clip. A content provider that wishes to distribute a media clip (e.g., a personalized advertisement or some other personalized media clip) could provide a request to the system for processing. The system utilizes the request, which identifies or contains at least one master clip to be readied for playback and contains type information associated with each of the locations where insert clips are to be merged into the master clip. The type information is then utilized to obtain user information from a system database and the user information is in turn used to obtain relevant insert clips for purposes of generating a media file. Once the insert clips are obtained the system merges them together with the master clip and distributes the completed media clip to the user via email or some other distribution means. In the case of cell phones, a ring tone or a ringback tone may be generated via an embodiment of the invention and stored on a telephone company server for play when ringing a user, or for playing to a user that is ringing another user in the case of ringback tones. The information in the system may be queried by ANI and DNIS, caller identification and dialed called party respectively.
  • FIG. 1 illustrates an architectural view of an embodiment of the invention for generating and distributing personalized media. Embodiments of the invention provide a user such as sender 100 with a way to generate and distribute media clips to one or more other recipients such as receiver 107. The reader should note that the term user, sender and receiver as used herein refers to a person using an embodiment of the invention and/or to processes such as computer applications that are programmed to run at specific times and execute programmed tasks. Typically, sender 100 utilizes a client to connect with receiver 107. A client is typically a computing device capable of communicating through a network with one or more types of networks. An example client as shown in this embodiment of the architecture is cell phone 101 or 106. The client may alternatively comprise a computing device such as a computer equipped with at least one processor, memory and storage media. The computing device is equipped and configured to communicate using at least one network communication means. For example, a client may be equipped with a modem to communicate through (wire based or wave based wireless) telephone services or alternatively may be configured to communicate through one or more networking protocols such as Transmission Control Protocol (TCP) in combination with the Internet Protocol (IP) over the Internet.
  • Computing devices including cellular telephones, Personal Digital Assistants (PDA), desktop computers, laptop computers or any other electronic apparatus capable of communicating through a wire-based and/or wireless network may be utilized as a client. For example, a client may be a personal digital assistant equipped with a browser capable of rendering Hypertext Markup Language (HTML), a JAVA virtual machine capable of running applets received from a remote server, and any other computer program code that supports communication between the user and a remote machine. Other applications allow the user to upload personal media clips, such as an email client, a data streaming service supported by the client, a HyperText Transport Protocol (HTTP) posting and any other means that allows a user to post media clips to media generation server 104.
  • Client 106 (also referred to as a playback device) comprises a media player. For example, client 106 may be a cell phone that allows one or more users to access a media database 103 to play media clips. Other types of multimedia destination clients may consist of a desktop computer equipped with a multimedia player, a personal digital assistant and any other electronic device capable of playing a media clip or allowing access to a network location that delivers media clips.
  • A media server is designed to handle access to and the delivery of media clips and typically is capable of handling communication to senders and receivers for purposes of delivering media. An example of a media server is cell gateway 102 or 105. Other context information may be retrieved from a plurality of sources by the media server or media database (or other server coupled with the media database) such as calendar information, location information or any other information that may be utilized in inserting context information into a media clip.
  • Media server 105 is capable of connecting to third party servers (e.g., other websites), local or remote databases to collect context and/or media clips information. Client 106 may also comprise a scheduler component in order to poll for media clips from media server 105.
  • Systems embodying the invention may optionally utilize media generation engine 104 to process media clips. For example, after media server 105 determines the context and the master and insert clips to use for generating the output media clips, media server 105 may communicate that information to media generation engine 104 so media generation engine 104 can retrieve the data for the media clips from one or more storage locations in media database 103. Media server 105 uses the input information to generate one or more media clips.
  • Media clip generation involves applying one or more processing algorithms to the input data. Typical processing involves merging/mixing, audio dubbing, inserting media clips and any other type of processing that takes one or more media clips and generating one or more new media clips based on context information. Media server 105 may employ a highly optimized multi-threaded compressed media seamless splicing process in order to maximize the number of connections, network throughput and number of media clips that can be processed per media server 105 per unit time. Furthermore, embodiments of the invention may employ a cache in order to further minimize the processing involved for repetitive access applications whereby each successive access avoids accessing media database 103 and the associated delays with accessing a database versus reading memory directly. The cache may comprise a single concatenated media clip built out of the various static and insert clips, or may comprise the individual clips that are sent or constructed at real-time to produce the seamless output clip.
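  • By way of illustration only, the following Python sketch shows a simple in-memory cache keyed by application and insert variable values, so that repeated requests avoid the media database and the splice step; the key structure and builder callback are hypothetical.
    class ClipCache:
        """Cache finished output clips keyed by application id plus the
        insert variable values used to personalize them."""
        def __init__(self):
            self._store = {}

        def get_or_build(self, app_id, variables, builder):
            key = (app_id, tuple(sorted(variables.items())))
            if key not in self._store:
                self._store[key] = builder(app_id, variables)   # splice on a miss
            return self._store[key]

    cache = ClipCache()
    clip = cache.get_or_build(111, {"recipient_name": "john", "caller_name": "bob"},
                              lambda app_id, variables: b"...spliced clip bytes...")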
  • In embodiments of the invention, media database 103 is typically a commercial available or freeware relational database management system (RDBMS). Storage locations may also be any file system accessible locally or through a network or a memory cache.
  • Systems embodying the invention may comprise media production software components running on client 101 or any other client accessible by sender 100, or on media generation engine 104. Typically a media production system allows a user to utilize newly recorded media clips, or existing media clips, to edit the media clips and prepare the media clips for usage with embodiments of the invention. The production phase is disclosed below in further detail, and involves producing media clip properties, attributes and symbols to allow, at a later stage, the multimedia generation engine to combine a plurality of media clips to generate one or more output media clips. A separate production system may be utilized independent of media generation engine 104. Regardless of the location where the production occurs, generation allows a producer to create clips using real life recordings or computer generated media that include audio, video or any other electronic data format. The system used for production allows users to generate master clips while saving insertion points, variable names for those insertion points and other attributes that associate the master clip with context information, and relationships between media clips.
  • FIG. 2 illustrates a method for generating personalized media in accordance with one or more embodiments of the invention. The process starts at 200. The system obtains at least one master clip at 201 and obtains any associated insert clip(s) at 202. The system then determines the points within the master clip(s) where the insert clip(s) are to be inserted. The determination of the insert point(s) is accomplished by obtaining metadata detailing the location of the insert points or through any other method of describing the locations where the insert clips are to be inserted. The master and insert clips are then seamlessly blended into a media clip in order to mask any transition points between the master and insert clips. This will be described more fully below. Optionally the individual clips may be stored away for later access in a cache or database. If the media clip is to be transmitted immediately as determined at 205, then the media clip is distributed at 207 (see FIG. 3). If the media clip is being created for an asynchronous pickup, for example as used in a ring tone or ringback tone for a cell phone, then the media clip is saved at 206. The system then waits to start the process over at 200 when another media clip generation is requested.
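  • By way of illustration only, the FIG. 2 flow may be summarized by the Python sketch below; the byte-offset blend is merely a stand-in for the seamless splice described elsewhere in this specification, and the callbacks are hypothetical.
    def blend(master: bytes, inserts, points) -> bytes:
        """Placeholder splice: drop each insert in at its (offset, duration) slot."""
        out, last = bytearray(), 0
        for (offset, duration), clip in zip(points, inserts):
            out += master[last:offset] + clip
            last = offset + duration                 # skip the slot left in the master
        return bytes(out + master[last:])

    def generate_media_clip(master, inserts, points, transmit=None, store=None):
        """Obtain the clips, locate the insert points from metadata, blend
        them, then distribute immediately or save for asynchronous pickup."""
        media = blend(master, inserts, points)
        if transmit is not None:
            transmit(media)                          # real-time distribution (FIG. 3)
        elif store is not None:
            store(media)                             # e.g., ring tone / ringback tone pickup
        return media

    generate_media_clip(b"Hello ....., welcome back", [b"[JOE]"], [(6, 5)],
                        store=lambda m: print(m))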
  • Although the invention contemplates the use of many different interfaces (e.g., a web interface, email client, and/or any other type of device configured to execute playback of the media clip) there are some specific details and generalities associated with the use of each type of interface. For instance, the web interface and/or email interface provides users with a way to access, through an interconnection fabric such as a computer network, one or more server sites. To this end the client and server system supports any type of network communication, including, but not limited to wireless networks, networking through telecommunications systems such as the phone system, optical networks and any other data transport mechanism that enables a client system to communicate with a server system. The user interface also supports data streaming, as in the case of streaming multimedia data to a browser plug-in, a multimedia player, and/or any type of hardware device capable of playing multimedia data. In addition, other embodiments of the invention may utilize web service interfaces, or may take advantage of peer-to-peer architectures for obtaining and splicing clips to one another and delivering them to one or a great number of users.
  • In accordance with one or more embodiments of the invention, the user interface provides a mechanism for obtaining a unique identifier associated with each user that accesses the system. Any data item that uniquely identifies a user or device is referred to as a unique identifier. For embodiments of the invention directed to telephonic use, the ANI and DNIS (caller identification and dialed called party number respectively) may be utilized to query media database 103 in FIG. 1 for a media clip unique to the desired sender and receiver. This for example may be utilized in order to play a certain piece of music, or personalized message to a particular caller, but not to a different caller. In addition to playing a personalized message or piece of music in the case of a ringback tone, the overlay of an audible standard telephone ring may be performed in order for the sender to comprehend that the receiver's phone is actually ringing and that they are not on hold. In other embodiments of the invention a serial number and/or a user name and password can act as a unique identifier and thereby provide access to the system while restricting unauthorized access. In at least one implementation of the invention the unique identifier is a cookie file containing user information (e.g., user name, age, and any other information about the user) or a URL or pointer to the appropriate user information. Once the system obtains the cookie information, that information is used for purposes of rendering a personalized multimedia file. For instance, the system can utilize the information contained within the cookie file to determine which insert clip to associate with a master clip for purposes of rendering the media clip. By identifying users, embodiments of the invention are configured to selectively determine the content of the multimedia data based on user information such as a user type, and user preferences.
  • The system may obtain master clips, insert clips, and/or other multimedia clips from a variety of locations. Such locations include database storage systems, data files, network locations, hard drives, optical storage devices and any medium capable of storing data including but not limited to network resources comprising web services and peer-to-peer networks. In an embodiment of the invention, the storage location is a relational database system. A database system may hold the master clips and/or insert clips used to generate the media clips and/or a variety of other data or metadata associated with each media clip. The data associated with the media clip allows for categorizing, classifying and searching media clips based on attributes. In addition, metadata further comprises information about the clip including insert points, variable names at insert points, durations, and other items. Database systems may be configured to index data in the database for purposes of expediting the process of searching for specific information in the database. The database may comprise multiple mirrors to enable the system to scale up to handle a large number of concurrent users.
  • FIG. 3 illustrates a method for distributing personalized media in accordance with one or more embodiments of the invention starting at 300. A media clip is obtained at 301 from media database 103 as shown in FIG. 1. If a media clip requires dynamic context data to be inserted before transmitting as determined at 302, then the context clip is obtained at 303. The insertion points for the context clip or clips is/are determined at 304. The context clip is seamlessly blended with the input media clip yielding a media clip comprising the context information at 305. If the media clip comprises no context information as determined at 302, or if context information is inserted then the media clip is formatted for the receiving playback device at 306. This may comprise formatting the media clip for a given audio format for example. The media clip is then transmitted to the client at 307.
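  • By way of illustration only, the FIG. 3 flow may be summarized by the Python sketch below; the hook functions stand in for the context retrieval, blending, formatting and transmission steps and are hypothetical.
    def distribute_media_clip(media, needs_context, get_context, blend_context,
                              format_for_device, send):
        """Optionally blend dynamic context data into the clip, format it for
        the target playback device, then transmit it."""
        if needs_context:                                       # step 302
            context_clip, points = get_context()                # steps 303-304
            media = blend_context(media, context_clip, points)  # step 305
        media = format_for_device(media)                        # step 306
        send(media)                                             # step 307

    distribute_media_clip(b"clip", needs_context=False,
                          get_context=None, blend_context=None,
                          format_for_device=lambda m: m,
                          send=lambda m: print(len(m), "bytes sent"))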
  • Context information may be obtained from any number of sources. For example, multimedia attributes may be obtained from a database system, time from a clock system, events information from a calendaring system, geographical information from a global positioning system and any other system capable of providing context information to embodiments of the invention. Context information may combine attribute information and rule information to determine a means and time for initiating playback. For example, an event originating from a calendaring system may specify which delivery means to use for delivering the output media clip depending on time of the day, type of the event, events preceding (or succeeding) the event, or location of the user. If the user is online, playback may be via the web interface, or if the user is using email playback may be in the form of an email. If the user is not actively involved in these activities at playback time, the playback may be redirected to a cellular phone. The system may use other context attributes to determine exclusion rules between media clips. For example, insert media clips designed for use in certain contexts such as happy occasions, may only be used in some context categories and not others. By using intelligent tools to interpret context rules, embodiments of the invention allow for providing an engine that may automatically handle tasks on behalf of persons.
  • FIG. 4 illustrates a block diagram representing the elements of one or more media clips configured in accordance with one or more embodiments of the invention. The horizontal axis is in increasing time order from left to right. Master clip 410 contains any type of multimedia data including, but not limited to, audio and/or video. One or more master clips can be merged together to create a media clip ready for playback. Insert clip 420 can also contain any type of data (e.g., audio, video, etc. . . ). The system may combine two or more media clips to form either a master clip or insert clip so long as the clips have at least one property in common. For example, an audio clip may be merged with a video clip if the audio track included with the video clip has the same characteristics as the audio clip to be inserted. If the clips have a mismatch in sampling rate or format, they may be normalized before combining. Clips with different lengths may be front or back-end truncated or cross sampled faster or slow in order to fit the clip within the desired slot. Alternatively, the master clip may contain metadata stating that the time slot to fit an insert clip into is not fixed, meaning that the clips can simply be concatenated one after the other since there may not be background sound information which would cause a non-seamless splice to occur. This can also be thought of as appending master clips back to back, for example if no fixed time gap was left in a given master clip and another clip such as an insert clip is to be appended before yet another master clip. Regardless of the nomenclature, the idea is that the independent clips are seamlessly spliced in order to produce an output clip that is perceived as a single recorded clip. The location where the system interleaves insert clip 420 with one or more master clips 410 is marked by a start and end point, or start point and duration. The insert clip is recorded to use the entire duration between the start and end point, thereby allowing the insert clip to sound or appear seamlessly integrated with the master clip.
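  • By way of illustration only, the following Python sketch splices an insert clip into a fixed-duration slot of an uncompressed master clip using the standard wave module; the paths and frame counts are hypothetical, and both clips are assumed to have already been normalized to the same sample rate, sample width and channel count.
    import wave

    def splice_wav(master_path, insert_path, out_path, slot_start, slot_frames):
        """Replace a fixed slot in a raw master WAV with an insert WAV that
        was recorded to fill the slot's entire duration."""
        with wave.open(master_path, "rb") as m, wave.open(insert_path, "rb") as i:
            if (m.getframerate(), m.getnchannels(), m.getsampwidth()) != \
               (i.getframerate(), i.getnchannels(), i.getsampwidth()):
                raise ValueError("clips must be normalized to the same format")
            params = m.getparams()
            head = m.readframes(slot_start)
            m.readframes(slot_frames)                 # discard the slot in the master
            tail = m.readframes(m.getnframes() - slot_start - slot_frames)
            body = i.readframes(slot_frames)          # insert fills the whole slot
        with wave.open(out_path, "wb") as out:
            out.setparams(params)
            out.writeframes(head + body + tail)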
  • Obtaining a master clip or insert clip may involve recording a live performance (e.g., a commercial or an artistic performance by a band), or capturing computer synthesized sounds. A producer identifies the clips that are to become master clips and edits the clips or the voice track of a clip or clips in order to leave gaps for dropping one or more insert clips. For purposes of aiding in the retrieval of a particular clip, the producer may also input attributes to describe the sounds or the images in the media clips. Some examples of data that may serve as attributes are text keywords and key phrases, a sound clip preview, an image preview or any other data format that may characterize a media clip. The producer also determines among all available media clips those that are designed to be insert clips. Insert clips are fashioned in embodiments of the invention to be inserted or mixed at one or more locations in one or more media clips (e.g., master clips). In some instances insert clips are artfully recorded to fill a predetermined duration of time. If a master clip leaves a gap of 3 seconds to place a person's name, the insert clip is recorded to fill up the entire 3 seconds. Thus, the underlying music track seamlessly integrates the master clip together with the insert clip. An insert clip may itself be a master clip, if the insert clip is designed for mixing with other media clips. The system also provides a mechanism for associating insert clips with keywords, key phrases, sound preview, image preview and any other data format that allow the system to identify, classify, sort or other manipulate the insert clip for purposes of data management, this information is commonly known as metadata. The producer marks the clip with insertion points. The invention contemplates the use of various techniques for marking insertion point. The system may, for instance, embed a signal having an identifiable pattern to mark a particular location in a master clip of other type of media clip. The signal is checked for when the system is looking for a location to place an insert clip. Other approaches involve defining location information and storing the location information along with the media clips (e.g., in a database system) in the form of metadata associated with the clip. Alternatively, the system may utilize a plurality of master clips that each begin and/or end at the point where an insert clip is to be placed. When the master clips are merged together with one or more appropriate insert clips the result is a seamless media clip ready for playback. Using this technique a song or some other type of recorded information is split into a set of compressed or uncompressed sequential files (e.g., WAV, AVI, MP3, OGG, etc. . .), certain files are identified as insert files, the voice track is removed from the insert files, and an insert clip is recorded over the insert file. This allows for the appearance of an original recording since the background music continues to play along while a vocally personalized or context associated phrase is inserted into the media clip. In other embodiments of the invention, there is no need to remove the voice track because the insert clips are recorded without such information. Thus, the producer can create the insert clip by simply adding the appropriate voice data to the clip. In either case the master clips and insert clips are then merged together to create a finalized media clip. 
The system may generate the media clip on the fly by integrating the appropriate master clips and insert clips together, or it may retrieve a previously created media clip from the database. The producer of a media clip may define mixing and insertion properties. The system may use such properties to define the way an insert clip is merged together with one or more master clips. For instance, properties may enable the system to know when to fade the master clip signal to allow for seamless integration of an insert clip and slowly return to normal after the insert clip completes. The markings indicating the split and merge locations may be embedded codes or metadata stored separate from the clip.
  • The applications generated with embodiments of the invention reflect the flow of natural language. This is accomplished when a creator of the application writes at least one "generic" filler for every slot in the application and/or provides an alphabetic set of "generic" fillers for slots with highly variable information (e.g. name), and accounts for phonemic blending that occurs across closely enunciated phrases. If the writer has determined that certain phonemic blends will produce a "dirty" post-produced result, and has decided to include the content preceding and following a slot, the studio guide should incorporate the preceding and following content in every read of the variable. A "dirty" blend occurs whenever a phoneme (vowel or consonant utterance) crosses into the following word in natural language. For example, consider the application: "Hey [description], it's Hilary again." In this application, the slot is "description", but a natural-sounding flow is best achieved by reading the "Hey" into the token (insert clip). Assuming we had several generic fillers, "guy", "girl" and "friend", the studio guide for the celebrity voice artist shows the "Hey" with each read, as "hey guy", "hey girl" and "hey friend" for example.
  • In the example above, most English-speaking people would never insert a meaningful pause between the "Hey" and the word that follows. Instead, the words are blended together to form a single utterance that sounds a bit like "Heyguy" or "Heygirl". Two rules can be used for determining dirty blends: first, if at least one of two adjoining words in a script is a variable and no natural-sounding pause would occur between them, then it is a dirty blend; second, if a preceding word in the script ends with the same consonant or vowel sound as the beginning of the word that follows (e.g. "first student", "selling ingots"), then it is a dirty blend. Finding this kind of blend in the script is a matter of reading it aloud at the same rate the writer intends to instruct the voice artist to read it, then adjusting the script and studio guide to accommodate recording before and after a variable, if needed.
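  • By way of illustration only, a rough Python sketch of these two rules follows; it uses spelling as a stand-in for phoneme comparison, which is a simplifying assumption.
    def same_boundary_sound(prev_word: str, next_word: str) -> bool:
        """Rough proxy for the second rule: do the trailing letters of the
        first word overlap the leading letters of the next (e.g. "first
        student", "selling ingots")?"""
        p, n = prev_word.lower(), next_word.lower()
        return any(p.endswith(n[:k]) or n.startswith(p[-k:]) for k in (1, 2, 3))

    def is_dirty_blend(prev_word: str, next_word: str,
                       either_is_variable: bool, natural_pause: bool) -> bool:
        """First rule: an adjoining variable with no natural pause is a dirty
        blend; otherwise fall back to the boundary-sound check."""
        if either_is_variable and not natural_pause:
            return True
        return same_boundary_sound(prev_word, next_word)

    # "Hey [description]" reads as a single utterance, so the studio guide
    # has the artist record "hey guy", "hey girl" and "hey friend" as tokens.
    print(is_dirty_blend("Hey", "guy", either_is_variable=True, natural_pause=False))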
  • FIG. 5 illustrates a flow chart showing the interactions of the system components for telephonic ring tone and/or ringback tone embodiments of the invention. A user selects personalized content at 500; this is shown in the block as a client computer on the left associated with a user visiting an operator portal, such as a telephone company portal, and selecting personalized content. The operator portal communicates with the media server at 501 to determine what personalized applications are available, to obtain the associated variable names, and to supply or receive any other information required, such as the bit rates desired for delivery to the specific target telephonic client device, for example a particular cell phone. The user configures the ring tone or ringback tone at 502; this can involve selecting a particular celebrity voice to use for the personalized clip and filling out forms to account for the required insert clips, for example recipient name, caller name or any other variables required for generation of the resulting clip. The user interacts with the operator portal in this step. The user previews the clip at 503 when the operator portal redirects the user to a preview window to allow the user to access the clip from the media server. The user confirms the transaction at 504 via the operator portal. When the transaction is complete at 504, the operator portal retrieves the finished clip or clips and uses the clip for a ring tone, ringback tone or answering message as appropriate.
  • In an example XML embodiment of an application, a single document may be utilized to describe the structure of the audio file, clip or clips to be delivered. The application XML is described in the table below:
    Element: <?xml version='1.0'?>
      Function: XML files begin with this element.
      Attributes: N/A. Notes: None.
    Element: <application>
      Function: Root element for the audio application. The body of every application file begins and ends with this tag.
      Attributes: None. Notes: None.
    Element: <name>
      Function: A plain text name for the application. Example: <name>A Cinderella Story</name>.
      Attributes: None. Notes: None.
    Element: <id>
      Function: The application ID. An integer identifier for the audio application. This value is passed in by a requestor and processed by the audio server to determine which application to serve. Example: <id>1000</id>.
      Attributes: None. Notes: The integer value used in the ID is system-unique. An HTTP request passes this data as app_id=[id].
    Element: <bitrate>
      Function: The default output bitrate for the application. Example: <bitrate>16</bitrate>.
      Attributes: None. Notes: Bitrate is an integer value.
    Element: <samplerate>
      Function: The default output sample rate for the application. Example: <samplerate>22050</samplerate>.
      Attributes: None. Notes: Sample rate is in hertz.
    Element: <channels>
      Function: The default output channel count. The audio server currently supports mono and stereo output. Example: <channels>1</channels>.
      Attributes: None. Notes: This element has 2 valid CDATA contents: 1 (mono) and 2 (stereo).
    Element: <mime-type>
      Function: The default output encoding. Examples: <mime-type>audio/wav</mime-type>, <mime-type>audio/mpeg</mime-type>.
      Attributes: None. Notes: This element has 2 valid CDATA contents: audio/wav (PCM) and audio/mpeg (MP3). Other formats are readily added.
    Element: <segments>
      Function: The number of pieces of audio used to assemble the file. Example: <segments count="5">.
      Attributes: count. Notes: This element is a parent element of the <audio> tag described below, and its attribute "count" corresponds to the number of <audio> tags it contains.
    Element: <audio>
      Function: Refers either to an audio file or to a directory containing multiple audio files.
      Attributes: type, id. Notes: The type attribute has two valid values: "static", which refers to static audio segments located at the application root, and "dynamic", which refers to directories at the application root containing multiple audio files. The id attribute is the file or directory name and omits file extensions (i.e. ".wav") and trailing slashes.
  • For clients performing HTTP GET or POST operations, several of the application parameters in the application file can be overridden at request-time by appending values to the HTTP query string or including them in the HTTP POST payload. The overrides are described below:
    Element: <mime-type>, <bitrate>, <samplerate>, and <channels>
      HTTP Override: enctype
      Function: Overrides all of the default elements for the application.
        Format: enctype=[format]-[bitrate]-[samplerate]-[channels]
        Example: enctype=pcm-8-8-1
        For example, this override may be used on an application whose deployed base audio is 16-bit, 22 KHz stereo, to deliver audio into a VoiceXML application that requires 8-bit, 8 KHz audio.
    Element: n/a
      HTTP Override: filename
      Function: Causes the audio server to return additional, protocol-appropriate Content-Disposition headers, using the filename specified by the client. This is useful for tagging returned content requested via cURL and other popular HTTP packages.
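  • By way of example only, the override parameters can simply be appended to the request query string; the short Python snippet below builds such a request, reusing the hypothetical server and application shown in the sample that follows.
    from urllib.parse import urlencode

    # Ask for 8-bit, 8 KHz mono PCM and a caller-supplied download file name.
    params = {"app_id": "111", "recipient_name": "john", "caller_name": "bob",
              "enctype": "pcm-8-8-1", "filename": "preview.wav"}
    print("http://audio.MEDIASERVERNAME.com/?" + urlencode(params))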
  • A sample application file for a ring tone incorporating a caller name, recipient name, and three static segments is shown in the table below:
    /dir/111/application.xml
    <?xml version="1.0"?>
    <application>
    <name>Daffy Duck Ringtone Demo</name>
    <id>111</id>
    <bitrate>16</bitrate>
    <samplerate>22050</samplerate>
    <channels>1</channels>
    <mime-type>audio/mpeg</mime-type>
    <segments count="5">
    <audio type="dynamic" id="recipient_name"/>
    <audio type="static" id="static_01"/>
    <audio type="dynamic" id="caller_name"/>
    <audio type="static" id="static_02"/>
    <audio type="static" id="static_03"/>
    </segments>
    </application>

    The transcript for the sample application is:
  • “[recipient_name] This is Daffy Duck! [caller_name] is calling you. Don't tell me you're DUCKING your calls... Get it? Ducking your calls? Oh that's rich! Now come on my fine-feathered friend, ANSWER THE PHONE!”
  • To retrieve an MP3 preview personalized as follows...
  • “John! Oh, John! This is Daffy Duck! Bob is calling you. Don't tell me you're ducking your calls... Get it? DUCKING your calls? Oh that's rich! Now come on my fine-feathered friend, ANSWER THE PHONE!”
  • The integrator would issue the following HTTP request: http://audio.MEDIASERVERNAME.com/?app_id=111&recipient_name=john&caller_name=bob
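  • For illustration only, the same request can be issued programmatically with Python's standard library; the local output filename is hypothetical.
    from urllib.request import urlopen
    from urllib.parse import urlencode

    query = urlencode({"app_id": 111, "recipient_name": "john", "caller_name": "bob"})
    with urlopen("http://audio.MEDIASERVERNAME.com/?" + query) as resp:
        data = resp.read()                        # personalized MP3 bytes from the audio server
    with open("daffy_preview.mp3", "wb") as f:    # hypothetical local filename
        f.write(data)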
  • A media server or media generation engine may comprise many different types of hardware. One example configuration comprises Dell PowerEdge 2400 servers, each with dual Pentium III Xeon processors with 512 KB of L2 cache. Each server may be configured with 1 GB of main memory and 42 GB (6×7 GB) of storage configured with software RAID 0+1. Alternatively, the system may be deployed on higher-density blade servers. Systems that employ MP3 encoding gain substantial performance benefits from faster processor speeds. Embodiments using seamless splicing of compressed formats may serve a higher number of users, since the processing requirements of such implementations are significantly lower: the output does not have to be re-encoded in its entirety, as it does when raw insert clips are added to a raw master clip. Some compressed formats allow frames to be inserted into the middle of a stream of other frames without altering the preceding or succeeding frames. These compression formats can be used to pre-encode master clips and insert clips before splicing them together. This optimization can yield a two-order-of-magnitude increase in the number of users serviced per second versus a methodology that splices non-cached raw master and insert clips and then compresses and transmits the result.
  • Embodiments of the invention utilizing scalable architectures may transcode and/or compress the media clips, taking advantage of the large reduction in network load that such compression offers. The system may perform WAV-to-MP3 transcoding using LAME or any other encoder capable of compressing data into the formats required for output by embodiments of the invention. While this scheme dramatically increases the audio quality deliverable for a given bandwidth and/or reduces network demand by roughly a 10:1 ratio, transcoding and compression place a heavy load on the media generation engine, or on the media server in the case of context clip blending.
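  • As an illustrative sketch only (assuming the LAME command-line encoder is installed and the file names are placeholders), one way to drive such a transcoding step is to shell out to LAME; the option set should be verified against the installed version.
    import subprocess

    def wav_to_mp3(wav_path, mp3_path, bitrate_kbps=64):
        """Transcode a WAV clip to constant-bit-rate MP3 via the LAME CLI."""
        subprocess.run(
            ["lame", "-b", str(bitrate_kbps), wav_path, mp3_path],
            check=True,
        )

    # Example: wav_to_mp3("master_clip.wav", "master_clip.mp3")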
  • Another embodiment of the invention utilizes an encoder with settings designed to allow for seamless splicing of compressed media. This eliminates the need for a layer of compression engines dedicated to compressing the assembled media clips, and creates a system that is approximately 300 times faster than a brute-force, cache-less personalization system that splices raw WAV clips and then encodes the result to MP3.
  • Seamless splicing of media clips may be performed for certain media types. Raw data types such as WAV, AIFF, and AU format files are ordered in time without borrowing bits from preceding or succeeding frames, so segments may be sliced out and added in with impunity. Highly compressed formats may or may not allow this type of manipulation of individual frames, since such formats generally place data that belongs in a hard-to-compress frame into neighboring easy-to-compress frames representing simple waveforms. This interleaving of data makes the frames dependent upon one another.
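  • To make the raw case concrete, the following minimal sketch (not part of the original disclosure) splices a raw insert clip into a raw WAV master at a given frame offset using Python's standard wave module; it assumes all clips share the same sample rate, sample width, and channel count, and the file names are placeholders.
    import wave

    def splice_wav(master_path, insert_path, output_path, insert_frame):
        """Insert a raw WAV clip into a raw WAV master at insert_frame, with no re-encoding."""
        with wave.open(master_path, "rb") as master, wave.open(insert_path, "rb") as ins:
            params = master.getparams()          # assumes both clips share format parameters
            head = master.readframes(insert_frame)
            tail = master.readframes(master.getnframes() - insert_frame)
            body = ins.readframes(ins.getnframes())
        with wave.open(output_path, "wb") as out:
            out.setparams(params)
            out.writeframes(head + body + tail)

    # Example: splice_wav("master.wav", "caller_name.wav", "personalized.wav", 22050)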
  • MP3 allows for compression with only slight degradation of the high end of the frequency spectrum when frames are encoded to hold information only for the current frame. This is achieved by setting the encoder to abandon the use of the bit reservoir, which degrades the frequency response slightly. In addition, it is possible, though more complex, to achieve frame independence with variable bit rate encoding and overlapping encodes, but the recordings must overlap in time. Since the gain in frequency response is minimal and the calculations and bit manipulations are more complex, embodiments of the invention may use constant bit rate encoding without the bit reservoir where maximum sound quality is not required, and variable bit rate encoding with the higher-complexity bit manipulation algorithms where maximum sound quality is required.
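  • As one hypothetical way such settings might be applied (not the only way, and not taken from the original disclosure), the LAME CLI exposes switches for constant-bit-rate output and for disabling the bit reservoir; the flags below (--nores, -t) are recalled from LAME's documented options and should be verified against the installed version.
    import subprocess

    def wav_to_splice_friendly_mp3(wav_path, mp3_path, bitrate_kbps=64):
        """Encode CBR MP3 with the bit reservoir disabled so frames remain independent."""
        subprocess.run(
            ["lame", "-b", str(bitrate_kbps), "--nores", "-t", wav_path, mp3_path],
            check=True,
        )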
  • Depending on the encoder used for a given format, different artifacts may be created when encoding. For example, the LAME encoder produces blank spots at the front and end of encoded clips as a byproduct of the algorithms used to encode and later decode the clips. Certain encoders use MDCT/filterbank routines functionally similar to the decoder routines and leave a 528-sample delay at the front of encoded files.
  • For embodiments of the invention employing LAME, seamlessly spliceable media clips may be created by clipping the first granule (576 samples) of the LAME-encoded insert clip, which contains MDCT coefficients, eliminating the ID3 metadata from the file, and removing the last 288 samples at the end of the insert clip. The resulting media clip contains no front or back-end artifacts, metadata, or data dependencies to hinder its independent insertion into a master clip.
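  • Once master and insert segments have been pre-encoded and trimmed in this way, assembly reduces to byte-level concatenation of frame-independent segments. The sketch below is illustrative only; the file names and segment order mirror the ring tone example above.
    def assemble_personalized_mp3(segment_paths, output_path):
        """Concatenate pre-trimmed, frame-independent MP3 segments into one clip."""
        with open(output_path, "wb") as out:
            for path in segment_paths:            # each file holds complete, independent frames
                with open(path, "rb") as seg:
                    out.write(seg.read())

    # Example, mirroring the 5-segment ring tone application above:
    # assemble_personalized_mp3(
    #     ["recipient_name/john.mp3", "static_01.mp3",
    #      "caller_name/bob.mp3", "static_02.mp3", "static_03.mp3"],
    #     "personalized_ringtone.mp3")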
  • In one scenario, a user utilizes the system embodying the invention to send customized messages (e.g., an invitation, advertisement, reminder, etc.) to one or more other users (e.g., recipients). In this example, a user may connect to a server and input a list of other users who are to receive the customized message. The sending user may select a master clip for distribution, and the system assembles a multimedia clip for distribution using the list of user information to identify the appropriate insert clip(s) to merge with the master clip. The system is also capable of retrieving context information to determine the best communication path to reach each recipient and/or the recipient's availability. The system may obtain other context information such as availability information, personal information (e.g., address and phone number), and any other context information useful for assembling and disseminating the multimedia clip. The system utilizes the context information in several ways. For example, the system may send messages at different times depending on the distance between each recipient's residence and the location of the meeting. The system may also send the message using different transport mechanisms depending upon the whereabouts of the recipient. If the recipient is currently using the Internet, the system may elect to email the message. Otherwise, the system may opt to transmit an audio message to a voicemail system or to contact the recipient by placing a cellular phone call.
  • In another scenario, the system retrieves recipient information (e.g., first and last names, title, etc.) and assembles a multimedia clip appropriate for each user. The system may, for instance, alter the language, gender, tone, or any other modifiable aspect of the voice track depending upon the characteristics of the user. The system may also select an appropriate delivery mechanism and format for the multimedia clip, thereby producing a multimedia clip specific to each user.
  • In one or more embodiments of the invention, the personalization variables or insert variables are mapped to existing insert clips. For example, for an application that comprises a name variable and enables a user to type in the name of the person for whom a message is to be personalized, names with no recorded clip are mapped to the closest available name. In this manner, a personalized message for the user name “Joseph” may map to “Joe”. In addition, decency standards may be applied: when a user types in (or says) a word that is inappropriate for the insert clip, such as a curse word, it may be mapped to an acceptable, non-offensive word. Roughly 1,300 first names cover about 80% of the United States population, so recording those names and mapping any name that is not found to a generic term such as “sir” or “madame” yields broad coverage at a small processing cost.
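  • To illustrate this mapping step only (the clip library, nickname table, decency list, and fallback terms below are hypothetical):
    # Hypothetical insert-clip library and nickname/decency mappings.
    RECORDED_NAMES = {"joe", "john", "bob", "mary"}       # names with recorded insert clips
    NICKNAME_MAP = {"joseph": "joe", "robert": "bob"}     # map unrecorded names to close matches
    BLOCKED_WORDS = {"badword"}                           # decency filter placeholder
    FALLBACK = {"m": "sir", "f": "madame"}

    def resolve_insert_clip(name, gender="m"):
        """Map a requested name to the insert clip that will actually be used."""
        key = name.strip().lower()
        if key in BLOCKED_WORDS:
            return FALLBACK[gender]
        if key in RECORDED_NAMES:
            return key
        if key in NICKNAME_MAP and NICKNAME_MAP[key] in RECORDED_NAMES:
            return NICKNAME_MAP[key]
        return FALLBACK[gender]                           # generic greeting for unknown names

    # Example: resolve_insert_clip("Joseph") -> "joe"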
  • As the generation of media clips may occur asynchronously with regard to their actual transmission, embodiments of the invention may be utilized by cell phone companies to provide ring tones and ringback tones along with other personalized media, such as button vocalizations in a celebrity voice. Because the personalized media is generated when a user purchases, for example, a personalized ringback tone, the media clip for a given incoming phone number or caller ID may be generated at any time before the clip is actually used. The revenue generated from the sale of the personalized media clip comes from the telephone company, which may charge its clients a different fee, or a fee for each use of the generated media clip.
  • Thus, a method and apparatus for generating and distributing a set of personalized media clips has been described. The invention, however, is defined by the claims and the full scope of any equivalents thereof.

Claims (20)

1. A system for generating and distributing personalized media comprising:
an insert clip comprising personalized media;
a master clip comprising an insertion point;
a network interface;
a computer coupled with said network interface wherein said computer further comprises a memory device comprising said insert clip and said master clip;
a playback device wherein said playback device is identified to said computer with an identifier selected from the group consisting of RFID, credit card number, ANI and DNIS; and,
a process executing on said computer wherein said process is configured to combine said insert clip with said master clip at said insertion point to create a personalized media clip with seamless splicing at said insertion point wherein said personalized media clip is created for said playback device.
2. The system of claim 1 wherein said playback device is configured to ring with said personalized media clip.
3. The system of claim 2 wherein said personalized ring media clip comprises a celebrity voice.
4. The system of claim 1 wherein said playback device is configured to ringback with said personalized media clip.
5. The system of claim 4 wherein said personalized ring media clip comprises a celebrity voice.
6. The system of claim 1 wherein said insert clip and said master clip comprise compressed media and wherein said process is further configured to create said output clip via compressed seamless splicing at said insertion point.
7. The system of claim 6 wherein said compressed media comprises a format selected from the group consisting of MP3, OGG, Flash and video data.
8. The system of claim 1 further comprising a context clip comprising context information wherein said master clip further comprises a second insertion point and wherein said computer is further configured to combine said context clip with said master clip at said second insertion point with undetectable transitions at said second insertion point.
9. The system of claim 8 wherein said context information is selected from a group comprising time, date, location and temperature information.
10. The system of claim 8 wherein said insert clip, said master clip and said context clip comprise a celebrity voice.
11. The system of claim 1 wherein said playback device is selected from a group consisting of a browser, PDA, cell phone, GPS receiver, slot machine, loyalty card reader, credit card reader, ATM machine, kiosk, toy, digital cable set-top box, hotel door and personal computer.
12. A method for generating and distributing personalized media comprising:
obtaining an insert clip comprising personalized media;
obtaining a master clip comprising an insertion point;
coupling a computer coupled with a network interface wherein said computer further comprises a memory device comprising said insert clip and said master clip;
coupling a playback device with said network;
identifying said playback device to said computer with an identifier selected from the group consisting of RFID, credit card number, ANI and DNIS; and,
executing a process on said computer wherein said process is configured to combine said insert clip with said master clip at said insertion point to create a personalized media clip with seamless splicing at said insertion point wherein said personalized media clip is created for said playback device.
13. The method of claim 12 further comprising ringing said playback device with said personalized media clip.
14. The method of claim 13 wherein said personalized ring media clip comprises a celebrity voice.
15. The method of claim 12 further comprising providing a ringback to said playback device comprising said personalized media clip.
16. The method of claim 15 wherein said personalized ring media clip comprises a celebrity voice.
17. The method of claim 12 further comprising:
compressing said insert clip; and,
compressing said master clip for seamless splicing before said executing said process configured to combine said insert clip with said master clip.
18. The method of claim 12 further comprising:
obtaining a context clip and obtaining a context insert point,
blending said context clip seamlessly with said media clip at said context insert point.
19. The method of claim 12 further comprising:
formatting said media clip for said playback device and transmitting said media clip to said playback device.
20. A system for generating and distributing personalized media comprising:
means for obtaining an insert clip comprising personalized media;
means for obtaining a master clip comprising an insertion point;
means for coupling a computer coupled with a network interface wherein said computer further comprises a memory device comprising said insert clip and said master clip;
means for coupling a playback device with said network;
means for identifying said playback device to said computer with an identifier selected from the group consisting of RFID, credit card number, ANI and DNIS; and,
means for executing a process on said computer wherein said process is configured to combine said insert clip with said master clip at said insertion point to create a personalized media clip with seamless splicing at said insertion point wherein said personalized media clip is created for said playback device.
US11/530,750 2002-10-04 2006-09-11 System and method for generating and distributing personalized media Abandoned US20070028264A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/530,750 US20070028264A1 (en) 2002-10-04 2006-09-11 System and method for generating and distributing personalized media

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US41612702P 2002-10-04 2002-10-04
US10/605,527 US7123696B2 (en) 2002-10-04 2003-10-06 Method and apparatus for generating and distributing personalized media clips
US11/040,166 US7142645B2 (en) 2002-10-04 2005-01-20 System and method for generating and distributing personalized media
US11/530,750 US20070028264A1 (en) 2002-10-04 2006-09-11 System and method for generating and distributing personalized media

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/040,166 Continuation US7142645B2 (en) 2002-10-04 2005-01-20 System and method for generating and distributing personalized media

Publications (1)

Publication Number Publication Date
US20070028264A1 true US20070028264A1 (en) 2007-02-01

Family

ID=37695854

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/040,166 Expired - Lifetime US7142645B2 (en) 2002-10-04 2005-01-20 System and method for generating and distributing personalized media
US11/530,750 Abandoned US20070028264A1 (en) 2002-10-04 2006-09-11 System and method for generating and distributing personalized media

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/040,166 Expired - Lifetime US7142645B2 (en) 2002-10-04 2005-01-20 System and method for generating and distributing personalized media

Country Status (1)

Country Link
US (2) US7142645B2 (en)


Families Citing this family (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050055433A1 (en) * 2003-07-11 2005-03-10 Boban Mathew System and method for advanced rule creation and management within an integrated virtual workspace
US20050100000A1 (en) * 2003-11-07 2005-05-12 Foursticks Pty Ltd Method and system for windows based traffic management
US7616741B2 (en) 2004-02-06 2009-11-10 At&T Intellectual Property I, L.P. System and method for facilitating a custom ring in connection with a call
EP1711901A1 (en) * 2004-02-06 2006-10-18 Sequoia Media Group, LLC Automated multimedia object models
US20060007328A1 (en) * 2004-07-07 2006-01-12 Paulsen Chett B Method of utilizing media cue cards for instruction in amateur photography and videography
US20060123975A1 (en) * 2004-12-09 2006-06-15 Swanson Nancy L Systems and methods for creating personalized or tailored compositions
US20060136556A1 (en) * 2004-12-17 2006-06-22 Eclips, Llc Systems and methods for personalizing audio data
US8027455B2 (en) * 2005-01-04 2011-09-27 Avaya Inc. Ringbacks based on extrinsic information
FR2880716A1 (en) * 2005-01-13 2006-07-14 Gemplus Sa CUSTOMIZATION OF SERVICE IN A TERMINAL DEVICE
US20060210033A1 (en) * 2005-03-17 2006-09-21 Lucent Technologies, Inc. Context sensitive ring back service
US8321041B2 (en) * 2005-05-02 2012-11-27 Clear Channel Management Services, Inc. Playlist-based content assembly
WO2006138620A2 (en) * 2005-06-15 2006-12-28 Music Choice Systems and methods for facilitating the acquisition of content
US8060591B1 (en) * 2005-09-01 2011-11-15 Sprint Spectrum L.P. Automatic delivery of alerts including static and dynamic portions
US7653418B1 (en) * 2005-09-28 2010-01-26 Sprint Spectrum L.P. Automatic rotation through play out of audio-clips in response to detected alert events
TWI302691B (en) * 2005-10-21 2008-11-01 Delta Electronics Inc Portable electronic device with speech synthesize and music prelude functions
US20070263348A1 (en) * 2005-11-02 2007-11-15 Dei Headquarters Inc. Versatile portable electronic device docking station with slidable connector
US7613287B1 (en) 2005-11-15 2009-11-03 TellMe Networks Method and apparatus for providing ringback tones
US7546148B2 (en) * 2005-12-05 2009-06-09 Sbc Knowledge Ventures, L.P. Method and system of creating customized ringtones
US20070133785A1 (en) * 2005-12-09 2007-06-14 Cotignola Christopher J Method and apparatus for providing personalized ringback greetings
US9160845B1 (en) 2006-03-16 2015-10-13 Sprint Spectrum L.P. Dynamic provisioning of a switch with custom ringback media
US9013511B2 (en) * 2006-08-09 2015-04-21 Qualcomm Incorporated Adaptive spatial variant interpolation for image upscaling
US20080057902A1 (en) * 2006-08-30 2008-03-06 Laurent Frederick Sidon Distribution of customized ringtones for mobile devices
CN1964396B (en) * 2006-09-30 2011-07-06 华为技术有限公司 A method, system and device to copy color ring
US8081751B1 (en) 2006-10-04 2011-12-20 Sprint Spectrum L.P. Method for triggering content download during call setup
US8059800B1 (en) 2006-10-17 2011-11-15 Sprint Spectrum L.P. Method for viral distribution of ringback media
US20080126294A1 (en) * 2006-10-30 2008-05-29 Qualcomm Incorporated Methods and apparatus for communicating media files amongst wireless communication devices
US20080115170A1 (en) * 2006-10-30 2008-05-15 Qualcomm Incorporated Methods and apparatus for recording and sharing broadcast media content on a wireless communication device
US8713191B1 (en) * 2006-11-20 2014-04-29 Sprint Spectrum L.P. Method and apparatus for establishing a media clip
US7996550B2 (en) * 2006-11-30 2011-08-09 Red Hat, Inc. Peer-to-peer download with quality of service fallback
KR101354759B1 (en) * 2007-01-03 2014-01-22 엘지전자 주식회사 Digital rights management method for terminal
US20080167968A1 (en) * 2007-01-07 2008-07-10 Eddy Cue Creating and Purchasing Ringtones
US7747290B1 (en) * 2007-01-22 2010-06-29 Sprint Spectrum L.P. Method and system for demarcating a portion of a media file as a ringtone
US20080183385A1 (en) * 2007-01-30 2008-07-31 Horn Kyle R System for the delivery of location-specific audio tours
US8280025B2 (en) * 2007-06-29 2012-10-02 Microsoft Corporation Automated unique call announcement
US8639276B2 (en) * 2007-06-29 2014-01-28 Microsoft Corporation Mobile telephone interactive call disposition system
CN101094469A (en) * 2007-07-17 2007-12-26 华为技术有限公司 Method and device for creating prompt information of mobile terminal
US8295684B2 (en) * 2007-10-08 2012-10-23 Sony Computer Entertainment America Inc. Method and system for scaling content for playback with variable duration
US9020937B2 (en) * 2007-12-01 2015-04-28 Trenton Draper Personal greeting/information/advertising system and method
US20110119058A1 (en) * 2007-12-10 2011-05-19 4419341 Canada, Inc. Method and system for the creation of a personalized video
US8223932B2 (en) * 2008-03-15 2012-07-17 Microsoft Corporation Appending content to a telephone communication
EP2104105A1 (en) * 2008-03-20 2009-09-23 British Telecommunications Public Limited Company Digital audio and video clip encoding
US8103666B1 (en) 2008-08-19 2012-01-24 Miller Frank W Variable audio/visual data incorporation system and method
US8145722B2 (en) * 2008-08-28 2012-03-27 Nathan Douglas Koons Media transfer system and associated methods
CN101686222A (en) * 2008-09-27 2010-03-31 中兴通讯股份有限公司 Media resource system and method for providing media resources
US20110029352A1 (en) * 2009-07-31 2011-02-03 Microsoft Corporation Brokering system for location-based tasks
US8359382B1 (en) 2010-01-06 2013-01-22 Sprint Communications Company L.P. Personalized integrated audio services
US8315920B2 (en) * 2010-03-09 2012-11-20 At&T Intellectual Property I, L.P. Method for automating onboarding of user generated ringback tones to sales distribution channel
US10078695B2 (en) 2011-04-11 2018-09-18 Evertz Microsystems Ltd. Methods and systems for network based video clip generation and management
US8719369B1 (en) 2011-04-25 2014-05-06 Brian Brown Customized message generation
US9948695B2 (en) * 2012-03-16 2018-04-17 Alcatel Lucent Enabling delivery of protected content using unprotected delivery services
US8719169B2 (en) * 2012-04-13 2014-05-06 Metrodigi, Inc. Digital media personalization
US20140040256A1 (en) * 2012-08-06 2014-02-06 Aol Inc. Systems and methods for processing electronic content
CN103594104A (en) * 2012-08-15 2014-02-19 腾讯科技(深圳)有限公司 Method and system for acquiring multimedia interest point, method and device for multimedia playing
US9871842B2 (en) * 2012-12-08 2018-01-16 Evertz Microsystems Ltd. Methods and systems for network based video clip processing and management
US20160162842A1 (en) * 2014-12-04 2016-06-09 Dogpatch Technology, Inc. Messaging system and method
US10299016B2 (en) * 2015-06-17 2019-05-21 Lomotif Private Limited Method for generating a composition of audible and visual media
CN105812920B (en) * 2016-03-14 2019-04-16 腾讯科技(深圳)有限公司 Media information processing method and media information processing unit
US11062497B2 (en) * 2017-07-17 2021-07-13 At&T Intellectual Property I, L.P. Structuralized creation and transmission of personalized audiovisual data
EP3678036A4 (en) * 2017-10-17 2020-07-08 Sony Corporation Information processing device, information processing method, and program
US20190155568A1 (en) * 2017-11-20 2019-05-23 Serious Biz, LLC Live presentation delivery aid system and method
EP3506255A1 (en) * 2017-12-28 2019-07-03 Spotify AB Voice feedback for user interface of media playback device
US10832734B2 (en) * 2019-02-25 2020-11-10 International Business Machines Corporation Dynamic audiovisual segment padding for machine learning
US20230208977A1 (en) * 2020-03-19 2023-06-29 Gary Forster Process for personalizing an electronic greeting with a voice message

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5444767A (en) * 1994-03-09 1995-08-22 Gregory J. Goetcheus Systems and methods for recording and delivering personalized audio messages
US6388560B1 (en) * 1996-03-19 2002-05-14 Sony Corporation Messaging terminal with voice notification
US6137834A (en) * 1996-05-29 2000-10-24 Sarnoff Corporation Method and apparatus for splicing compressed information streams
US6122617A (en) * 1996-07-16 2000-09-19 Tjaden; Gary S. Personalized audio information delivery system
US6529586B1 (en) * 2000-08-31 2003-03-04 Oracle Cable, Inc. System and method for gathering, personalized rendering, and secure telephonic transmission of audio data
US6694219B2 (en) * 2001-02-01 2004-02-17 Quad/Graphics, Inc. Method and apparatus for assembling personalized electronic media into printed products
US20020138302A1 (en) * 2001-03-21 2002-09-26 David Bodnick Prepaid telecommunication card for health care compliance
US20040125925A1 (en) * 2002-10-01 2004-07-01 Rodolphe Marsot Method of instant voice messaging and device for the implementation of such a message

Cited By (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100183149A1 (en) * 1999-11-09 2010-07-22 Sony Corporation Method for simulcrypting scrambled data to a plurality of conditional access devices
US8488788B2 (en) 1999-11-09 2013-07-16 Sony Corporation Method for simulcrypting scrambled data to a plurality of conditional access devices
US8645988B2 (en) * 2002-12-13 2014-02-04 Sony Corporation Content personalization for digital content
US20060174264A1 (en) * 2002-12-13 2006-08-03 Sony Electronics Inc. Content personalization for digital conent
US20060168616A1 (en) * 2002-12-13 2006-07-27 Sony Electronics Inc. Targeted advertisement selection from a digital stream
US8667525B2 (en) 2002-12-13 2014-03-04 Sony Corporation Targeted advertisement selection from a digital stream
US8850479B2 (en) * 2005-03-02 2014-09-30 Panasonic Corporation Distribution device and reception device
US20090013351A1 (en) * 2005-03-02 2009-01-08 Matsushita Electric Industrial Co., Ltd. Distribution Device and Reception Device
US20110161377A1 (en) * 2005-04-07 2011-06-30 Ingram Dv Llc System and method for correlating a first title with a second title
US20070008322A1 (en) * 2005-07-11 2007-01-11 Ludwigsen David M System and method for creating animated video with personalized elements
US8077179B2 (en) 2005-07-11 2011-12-13 Pandoodle Corp. System and method for creating animated video with personalized elements
US8219402B2 (en) * 2007-01-03 2012-07-10 International Business Machines Corporation Asynchronous receipt of information from a user
US20080162130A1 (en) * 2007-01-03 2008-07-03 Bodin William K Asynchronous receipt of information from a user
US8347213B2 (en) 2007-03-02 2013-01-01 Animoto, Inc. Automatically generating audiovisual works
WO2008109233A1 (en) * 2007-03-02 2008-09-12 Animoto, Llc Automatically generating audiovisual works
US8717367B2 (en) 2007-03-02 2014-05-06 Animoto, Inc. Automatically generating audiovisual works
US20080215979A1 (en) * 2007-03-02 2008-09-04 Clifton Stephen J Automatically generating audiovisual works
US20080263454A1 (en) * 2007-04-17 2008-10-23 Ridemakerz, Llc Method of providing a consumer profile accessible by an on-line interface and related to retail purchase of custom personalized toys
US8548819B2 (en) 2007-04-17 2013-10-01 Ridemakerz, Llc Method of providing a consumer profile accessible by an on-line interface and related to retail purchase of custom personalized toys
US20080262980A1 (en) * 2007-04-19 2008-10-23 Ridemakerz, Llc System and method for assembly of modular toy vehicle
WO2008131257A1 (en) * 2007-04-19 2008-10-30 Ridemakerz, Llc System and method for assembly of modular toy vehicle
US20080273689A1 (en) * 2007-05-04 2008-11-06 Lucent Technologies Inc. Tone advisor, a tone assisting system and a method of associating tones with callers
US9185239B2 (en) * 2007-05-04 2015-11-10 Alcatel Lucent Tone advisor, a tone assisting system and a method of associating tones with callers
US20090042622A1 (en) * 2007-08-06 2009-02-12 Mspot, Inc. Method and apparatus for creating, using, and disseminating customized audio/video clips
US9002410B2 (en) 2007-08-06 2015-04-07 Samsung Electronics Co., Ltd. Method and apparatus for creating, using, and disseminating customized audio/video clips
WO2009020573A1 (en) * 2007-08-06 2009-02-12 Mspot, Inc. Method and apparatus for creating and disseminating audio-video clips and answer tones
US8135114B2 (en) 2007-08-06 2012-03-13 Mspot, Inc. Method and apparatus for creating an answer tone for a computing device with phone capabilities or a telephone
US20090041208A1 (en) * 2007-08-06 2009-02-12 Mspot, Inc. Method and apparatus for creating an answer tone for a computing device with phone cababilities or a telephone
GB2452315B (en) * 2007-08-31 2012-06-06 Sony Corp A distribution network and method
US20090064249A1 (en) * 2007-08-31 2009-03-05 Sony Corporation Distribution network and method
US20090150925A1 (en) * 2007-12-06 2009-06-11 At&T Labs, Inc. System and Method of Providing An Alert
US20110023059A1 (en) * 2008-04-10 2011-01-27 Mccallister Benjamin Method and apparatus for content replacement in live production
US10200749B2 (en) * 2008-04-10 2019-02-05 Gvbb Holdings S.A.R.L. Method and apparatus for content replacement in live production
US20090281794A1 (en) * 2008-05-07 2009-11-12 Ben-Haroush Sagi Avraham Method and system for ordering a gift with a personalized celebrity audible message
US20100125795A1 (en) * 2008-07-03 2010-05-20 Mspot, Inc. Method and apparatus for concatenating audio/video clips
US11308156B1 (en) 2008-07-29 2022-04-19 Mimzi, Llc Photographic memory
US11086929B1 (en) 2008-07-29 2021-08-10 Mimzi LLC Photographic memory
US11782975B1 (en) 2008-07-29 2023-10-10 Mimzi, Llc Photographic memory
US9792361B1 (en) 2008-07-29 2017-10-17 James L. Geer Photographic memory
US9128981B1 (en) 2008-07-29 2015-09-08 James L. Geer Phone assisted ‘photographic memory’
US8713601B2 (en) * 2008-11-17 2014-04-29 At&T Intellectual Property I, L.P. System and method for content delivery
US20100125868A1 (en) * 2008-11-17 2010-05-20 At&T Intellectual Property I, L.P. System and Method for Content Delivery
US9906826B2 (en) * 2009-09-30 2018-02-27 At&T Intellectual Property I, L.P. Apparatus and method for media detection and replacement
US20160100208A1 (en) * 2009-09-30 2016-04-07 At&T Intellectual Property I, Lp Apparatus and Method for Media Detection and Replacement
US10531143B2 (en) 2009-09-30 2020-01-07 At&T Intellectual Property I, L.P. Apparatus and method for media detection and replacement
US20110099590A1 (en) * 2009-10-26 2011-04-28 Lg Electronics Inc. Digital broadcasting system and method of processing data in digital broadcasting system
US8250612B2 (en) * 2009-10-26 2012-08-21 Lg Electronics Inc. Digital broadcasting system and method of processing data in digital broadcasting system
WO2011055274A1 (en) * 2009-11-06 2011-05-12 Ericsson Television Inc. Systems and methods for replacing audio segments in an audio track for a video asset
WO2011082092A1 (en) * 2009-12-29 2011-07-07 Mspot, Inc. Method and apparatus for concatenating audio/video clips
WO2012033903A1 (en) * 2010-09-08 2012-03-15 Sony Corporation System and method for providing video clips, and the creation thereof
US20130046781A1 (en) * 2011-08-19 2013-02-21 Stargreetz, Inc. Design, creation, and delivery of personalized message/audio-video content
WO2013028516A1 (en) * 2011-08-19 2013-02-28 Stargreetz, Inc. Design, creation, and delivery of personalized message/audio-video content
US20140229576A1 (en) * 2013-02-08 2014-08-14 Alpine Audio Now, LLC System and method for buffering streaming media utilizing double buffers
US11249774B2 (en) 2018-04-20 2022-02-15 Facebook, Inc. Realtime bandwidth-based communication for assistant systems
US20230186618A1 (en) 2018-04-20 2023-06-15 Meta Platforms, Inc. Generating Multi-Perspective Responses by Assistant Systems
US11249773B2 (en) 2018-04-20 2022-02-15 Facebook Technologies, Llc. Auto-completion for gesture-input in assistant systems
US11301521B1 (en) 2018-04-20 2022-04-12 Meta Platforms, Inc. Suggestions for fallback social contacts for assistant systems
US11307880B2 (en) 2018-04-20 2022-04-19 Meta Platforms, Inc. Assisting users with personalized and contextual communication content
US11308169B1 (en) 2018-04-20 2022-04-19 Meta Platforms, Inc. Generating multi-perspective responses by assistant systems
US11231946B2 (en) 2018-04-20 2022-01-25 Facebook Technologies, Llc Personalized gesture recognition for user interaction with assistant systems
US11368420B1 (en) 2018-04-20 2022-06-21 Facebook Technologies, Llc. Dialog state tracking for assistant systems
US11429649B2 (en) 2018-04-20 2022-08-30 Meta Platforms, Inc. Assisting users with efficient information sharing among social connections
US11544305B2 (en) 2018-04-20 2023-01-03 Meta Platforms, Inc. Intent identification for agent matching by assistant systems
US11676220B2 (en) 2018-04-20 2023-06-13 Meta Platforms, Inc. Processing multimodal user input for assistant systems
US11245646B1 (en) * 2018-04-20 2022-02-08 Facebook, Inc. Predictive injection of conversation fillers for assistant systems
US11688159B2 (en) 2018-04-20 2023-06-27 Meta Platforms, Inc. Engaging users by personalized composing-content recommendation
US11704900B2 (en) 2018-04-20 2023-07-18 Meta Platforms, Inc. Predictive injection of conversation fillers for assistant systems
US11704899B2 (en) 2018-04-20 2023-07-18 Meta Platforms, Inc. Resolving entities from multiple data sources for assistant systems
US11715289B2 (en) 2018-04-20 2023-08-01 Meta Platforms, Inc. Generating multi-perspective responses by assistant systems
US11715042B1 (en) 2018-04-20 2023-08-01 Meta Platforms Technologies, Llc Interpretability of deep reinforcement learning models in assistant systems
US11721093B2 (en) 2018-04-20 2023-08-08 Meta Platforms, Inc. Content summarization for assistant systems
US11727677B2 (en) 2018-04-20 2023-08-15 Meta Platforms Technologies, Llc Personalized gesture recognition for user interaction with assistant systems
US20210224346A1 (en) 2018-04-20 2021-07-22 Facebook, Inc. Engaging Users by Personalized Composing-Content Recommendation
US11886473B2 (en) 2018-04-20 2024-01-30 Meta Platforms, Inc. Intent identification for agent matching by assistant systems
US11887359B2 (en) 2018-04-20 2024-01-30 Meta Platforms, Inc. Content suggestions for content digests for assistant systems
US11908179B2 (en) 2018-04-20 2024-02-20 Meta Platforms, Inc. Suggestions for fallback social contacts for assistant systems
US11908181B2 (en) 2018-04-20 2024-02-20 Meta Platforms, Inc. Generating multi-perspective responses by assistant systems

Also Published As

Publication number Publication date
US7142645B2 (en) 2006-11-28
US20050185918A1 (en) 2005-08-25

Similar Documents

Publication Publication Date Title
US7142645B2 (en) System and method for generating and distributing personalized media
EP1847102A1 (en) System and method for generating and distributing personalized media
US7123696B2 (en) Method and apparatus for generating and distributing personalized media clips
US20060136556A1 (en) Systems and methods for personalizing audio data
US6965770B2 (en) Dynamic content delivery responsive to user requests
US7761293B2 (en) Spoken mobile engine
US7408106B2 (en) Tele-karaoke
US9318100B2 (en) Supplementing audio recorded in a media file
US20030028380A1 (en) Speech system
US8086464B2 (en) System for low-latency animation of talking heads
US7277855B1 (en) Personalized text-to-speech services
US9196241B2 (en) Asynchronous communications using messages recorded on handheld devices
US20080189099A1 (en) Customizable Delivery of Audio Information
CN101557483B (en) Methods and systems for generating a media program
US20090204402A1 (en) Method and apparatus for creating customized podcasts with multiple text-to-speech voices
EP1277200A1 (en) Speech system
WO2008001500A1 (en) Audio content generation system, information exchange system, program, audio content generation method, and information exchange method
WO2007002448A1 (en) Apparatus, system, method, and article of manufacture for automatic context-based media transformation and generation
WO2002001392A2 (en) Networked audio posting method and system
WO2002009088A2 (en) Personalized disc jockey system
EP1829344A1 (en) Method and system for synthesizing a video message
US8219402B2 (en) Asynchronous receipt of information from a user
US20230208977A1 (en) Process for personalizing an electronic greeting with a voice message
WO2003063133A1 (en) Personalisation of the acoustic presentation of messages synthesised in a terminal
AU2989301A (en) Speech system

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION