WO2016077653A1 - Novel systems and methods for processing an object - Google Patents

Novel systems and methods for processing an object

Info

Publication number
WO2016077653A1
Authority
WO
WIPO (PCT)
Prior art keywords
craft
image
imaging
dimensional
dimensional image
Prior art date
Application number
PCT/US2015/060487
Other languages
French (fr)
Inventor
William T. Manak
Robert Ty HOBLITT
Original Assignee
Manak William T
Hoblitt Robert Ty
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Manak William T, Hoblitt Robert Ty
Publication of WO2016077653A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64D EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENTS OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D 11/00 Passenger or crew accommodation; Flight-deck installations not otherwise provided for
    • B64D 11/02 Toilet fittings
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N 21/9515 Objects of complex shape, e.g. examined with use of a surface follower device

Definitions

  • the present invention generally relates to processing objects. More particularly, the present invention relates to novel systems and methods that effectively allow processing, such as inspecting, manufacturing, maintaining, repairing and overhauling of objects (e.g., crafts, vehicles, imaging sensors and other equipment that is used in a processing facility).
  • the present arrangements and present teachings provide novel systems and methods for processing objects.
  • the present arrangements provide craft processing facilities.
  • the present craft processing facility includes: (i) at least one member chosen from a group comprising one or more sidewalls, floor and roof, such that one or more of the sidewalls are adjacent to a craft during an inspection operation, the floor is disposed below one or more of the sidewalls and the roof is disposed above one or more of the sidewalls, and wherein at least one of one or more of the sidewalls, the floor or the roof defines a three-dimensional robotic envelope that is sufficiently spacious to receive at least a portion of the craft; (ii) one or more imaging sensors coupled to or disposed on any one member chosen from the group comprising one or more of sidewalls, floor and roof, and one or more of the imaging sensors operative to obtain a two-dimensional and/or a three-dimensional image of at least the portion of the craft; and (iii) a reference point serving as (0, 0, 0) coordinates of the three-dimensional robotic envelope.
  • the craft processing facility further comprising one or more sensor support structures, each of which supports one or more of the imaging sensors and is disposed on the floor and/or the roof of the craft processing facility.
  • A sensor support structure may be any rigid structure that effectively supports the imaging sensors.
  • imaging sensors include components that enable both transmission and detection modes of operation.
  • one or more of the imaging sensors include at least one member chosen from a group comprising laser, beam expander, receiving telescope, array of receivers and data processor.
  • a craft need not be inside the robotic envelope for the image sensors to capture a craft image. Rather, in one embodiment of the present arrangements, one or more of the imaging sensors have a field of view that captures a portion of the craft disposed outside the robotic envelope. As will be explained later, such an embodiment is useful in identifying and verifying a craft and its components and/or subcomponents.
  • the present craft processing facility includes a guy line, such that one or more of the imaging sensors are movably disposed on the guy line.
  • the craft processing facility further includes markers that ascertain the position of one or more of the imaging sensors on the guy line. In this embodiment, the markers detect the signal transmitted by the imaging sensors and are, therefore, able to ascertain their position relative to the reference point.
  • the craft processing facility of the present arrangements further includes one or more rails such that one or more of the imaging sensors are movably disposed on each of the rails.
  • one or more of the imaging sensors are capable of movement to locations on the rail that provide a field of view that does not capture blocked or shadowed areas of the craft.
  • a single imaging sensor is used and this sensor captures up to a 360° field of view inside the robotic envelope.
  • the craft processing facility further includes robots that carry out one processing activity chosen from a group comprising inspection, manufacture, maintenance, repair and overhaul of an object (e.g. , a craft or a conveyance system).
  • One such exemplar location method includes: (i) receiving at least a portion of a craft inside a robotic envelope for processing; (ii) determining a position of an imaging sensor that is coupled to or disposed on at least one member chosen from a group comprising one or more sidewalls, floor and roof, such that one or more of the sidewalls are adjacent to the craft during the processing operation, the floor is disposed below one or more of the sidewalls and the roof is disposed above one or more of the sidewalls, and wherein at least one of one or more of the sidewalls, the floor or the roof defines a three-dimensional robotic envelope that is sufficiently spacious to receive at least a portion of the craft, and wherein the position is determined relative to a reference point serving as (0,0,0) coordinates of the three-dimensional robotic envelope; (iii) imaging at least a portion of the craft using the imaging sensor to develop a two-dimensional and/or a three-dimensional image of at least a portion of the craft, and wherein during the imaging, the imaging sensor is held stationary along X, Y and Z-axes at the position; and (iv) locating in space the craft relative to the reference point by using the two-dimensional image and/or the three-dimensional image.
  • the location method further includes: (i) moving the imaging sensor to another position; (ii) determining location of another position of the imaging sensor relative to the reference point of the three-dimensional robotic envelope; (iii) imaging at least the portion and/or another portion of the craft using the imaging sensor to develop another two-dimensional image and/or another three-dimensional image of at least the portion and/or the specified another portion of the craft, and wherein during the imaging, the imaging sensor is held stationary along X, Y and Z-axes at another position.
  • the present location methods may further include stitching the two-dimensional image and the specified another two-dimensional image and/or stitching the three-dimensional image and the specified another three-dimensional image to form a resulting two-dimensional image and/or a resulting three-dimensional image, respectively.
  • the present location methods further include confirming location in space of the craft and/or locating in space the specified another portion of the craft relative to the reference point by using the resulting two-dimensional image and/or the resulting three-dimensional image.
  • when a single image is to be captured, the imaging sensor is, at a certain position, held stationary along X, Y and Z-axes, but is free to rotate in that position. Further, during an imaging operation, when multiple image frames may be collected from a single imaging sensor, the imaging sensor is free to move from one position to another.
  • the present location methods further include: (i) imaging, using the imaging sensor, at least a portion of a component and/or a subcomponent of the craft to develop a two-dimensional image and/or a three-dimensional image of at least the portion of the component and/or the subcomponent, and wherein during the imaging, the imaging sensor is held stationary along X, Y and Z-axes at the position (of the imaging sensor); and (ii) locating in space the component and/or the subcomponent of the craft relative to the reference point by using the two-dimensional image and/or the three-dimensional image of at least the portion of the component and/or the subcomponent.
  • the present location methods may further still include: (i) mobilizing one or more robots, each having one or more of the sensors, to safety zones within the robotic envelope; and (ii) positioning one or more of the robots at approach points and/or at a standoff distance to commence processing of the component and/or the subcomponent.
  • Representative processing techniques include at least one technique chosen from a group comprising inspecting, manufacturing, maintaining, repairing and overhauling.
  • the location method further includes: (i) determining another position of another imaging sensor that is coupled to or disposed on at least one member chosen from the group comprising one or more sidewalls, floor and roof, and wherein the specified another position is determined relative to the reference point, and wherein the imaging sensor captures a field of view and the specified another imaging sensor captures another field of view that is not the same as the field of view, but the field of view and the specified another field of view include one or more overlapping portions; (ii) imaging at least the portion and/or another portion of the craft using the specified another imaging sensor to develop another two-dimensional image and/or another three-dimensional image of at least the portion and/or another portion of the craft, and wherein during the imaging step, the specified another imaging sensor is held stationary along X, Y and Z-axes at the specified another position, and wherein the two-dimensional image and/or the three-dimensional image and the specified another two-dimensional image and/or the specified another three-dimensional image have overlapping portions that are stitched to form a resulting two-dimensional image and/or a resulting three-dimensional image, respectively.
  • the present teachings provide methods of processing an object.
  • One such exemplar method includes: (i) receiving at least a portion of an object inside a robotic envelope; (ii) determining a position of an imaging sensor that is coupled to or disposed on at least one member chosen from the group comprising one or more of sidewalls, floor and roof, and wherein the position is determined relative to a reference point serving as (0,0,0) coordinates of the three-dimensional robotic envelope; (iii) imaging at least a portion of the object using the imaging sensors to develop a two-dimensional image and/or a three-dimensional image of at least a portion of the object, and wherein during the imaging step, the imaging sensors are held stationary along X, Y and Z-axes at the position (of the imaging sensor); (iv) locating in space the object relative to the reference point by using the two-dimensional image and/or the three-dimensional image; (v) imaging at least a portion of one or more components and/or one or more subcomponents of the located object using the imaging sensors to develop a two-dimensional image and/or a three-dimensional image of at least the portion of one or more of the components and/or one or more of the subcomponents of the object; and (vi) locating in space one or more of the components and/or one or more of the subcomponents of the object relative to the reference point.
  • the present teachings provide methods of identification and/or verification.
  • One such exemplar method includes: (i) obtaining a reference craft image; (ii) receiving a candidate craft within a field of view of one or more sensors; (iii) imaging the candidate craft to produce a candidate craft image; (iv) comparing the reference craft image with the candidate craft image to obtain a residual between the reference craft image and the candidate craft image; and (v) determining whether the residual is within a margin of error and identifying and/or verifying the candidate craft.
  • Each of the reference craft image and the candidate craft image is preferably a two-dimensional image and/or a three-dimensional image.
  • the margin of error represents a value that is less than about 1% deviation (in linear measurement units) from the location of identifying or verifying features present in the reference craft image. In other embodiments, this margin of error is less than about 2%, about 3%, about 4% or about 5% deviation (in linear measurement units) from the location of identifying or verifying features present in the reference craft image. In one embodiment of the present teachings, where the reference craft image is stored on a database, the step of obtaining the reference craft image includes retrieving the reference craft image from the database.
  • the step of obtaining the reference craft image includes: (i) receiving a reference craft within one or more of the field of views of one or more of the sensors; and (ii) imaging the reference craft to produce the reference craft image.
  • stitching different reference craft images obtained from different fields of view may be required to produce the ultimate reference craft image that is used in subsequent steps.
  • the reference craft is the candidate craft and the reference craft image includes a prior candidate craft image obtained from the candidate craft prior to the above-mentioned step of receiving the candidate craft.
  • the methods of identification and/or verification may further include identifying presence or modification of one or more components and/or one or more sub-components on the candidate craft, and wherein one or more of the components and/or one or more of the subcomponents are not present or modified on the reference craft image.
  • the identification and/or verification methods may further include identifying absence of one or more components and/or one or more sub-components on the candidate craft, and wherein one or more of the components and/or one or more of the subcomponents are present on the reference craft image.
  • the present identification and/or verification methods may further include identifying the model of one or more components and/or one or more subcomponents on the candidate craft.
  • the step of determining includes identifying and/or verifying torque and/or alignment of one or more components and/or one or more subcomponents in the candidate craft relative to torque and/or alignment of one or more of the components and/or one or more of the subcomponents in the reference craft.
  • comparing the reference craft image with the candidate craft image includes implementing one algorithm chosen from a group comprising geometric recognition algorithm, photometric recognition algorithm and three-dimensional surface recognition algorithm.
  • the present teachings provide measurement methods.
  • One such exemplar method includes: (i) receiving a candidate craft within one or more field of views of one or more imaging sensors; (ii) imaging the candidate craft to produce a candidate craft image; (iii) obtaining a reference craft image and one or more reference geometric measurements associated with the reference craft image; (iv) comparing the reference craft image with the candidate craft image and obtaining a residual between the reference craft image and the candidate craft image; (v) measuring one or more geometrical features of the residual to arrive at one or more residual geometric measurements associated with the residual; and (vi) adding or subtracting reference geometric measurements to the residual geometric measurements and arriving at candidate geometric measurements.
  • a geometric measurement is one property chosen from a group comprising surface area, volume and linear measurement.
  • the measurement methods of the present teachings use a two-dimensional image and/or three-dimensional image to provide a wide variety of measurement data.
  • the present measurement methods provide the dimensions of that component and/or subcomponent.
  • the present measurement methods provide length, area and volume of the detected defect.
  • the present measurement methods provide thickness and density values.
  • Figure 1 shows a perspective view of a craft inspection facility, according to one embodiment of the present arrangements, for imaging a craft using imaging sensor devices disposed on the sidewalls and floor of the craft inspection facility.
  • Figure 2 shows a perspective view of a craft inspection facility, according to another embodiment of the present arrangements and that is substantially similar to the craft inspection facility of Figure 1, except that it does not include imaging sensor devices on the floor of the craft inspection facility.
  • Figure 3 shows a perspective view of a craft inspection area, according to one embodiment of the present arrangements, for inspecting a craft using imaging sensor devices, at least some of which are disposed on a sensor support structure.
  • Figure 4 shows a block diagram of certain components inside an imaging sensor device, according to one embodiment of the present arrangements and that allow the imaging sensor device to both transmit and receive light energy.
  • Figure 5 shows a block diagram of a positioning system, according to one embodiment of the present arrangements and that determines a position of an object, such as a craft and/or its components/subcomponents.
  • Figure 6 shows a block diagram of a laser activating and recording system, according to one embodiment of the present arrangements and that activates a laser to obtain data that is recorded in a recording system.
  • Figure 7 shows a perspective view of a craft inspection facility, according to a yet another embodiment of the present arrangements and that is capable of identifying, positioning and measuring a craft outside the craft inspection facility.
  • Figure 8 shows a perspective view of a craft inspection facility, according to a yet another embodiment of the present arrangements and that includes imaging sensor devices, which are mobile on a cable (e.g. , a guy wire) and are capable of determining their location from markers located inside the craft inspection facility.
  • Figure 9 shows a perspective view of a craft inspection facility, according to a yet another embodiment of the present arrangements and that includes imaging sensor devices, which are mobile to avoid shadow or block areas.
  • Figure 10 shows a perspective view of a craft inspection facility, according to a yet another embodiment of the present arrangements and that includes an imaging sensor device having a field of view that is approximately 360°.
  • Although Figures 1-10 describe processing, according to the present arrangements and teachings, of crafts, the present teachings are not so limited.
  • the present arrangements and present teachings described herein extend generally to objects that may not be crafts. Examples of such objects include vehicles, equipment (e.g., imaging sensors, robots, and external objects inside a processing facility that are not imaging sensors and robots).
  • process steps have not been described in detail in order to not unnecessarily obscure the invention.
  • various well-known image recognition algorithms are described without providing details that are well known to those skilled in the image recognition art.
  • FIG. 1 shows a perspective view of a craft inspection facility 100, according to one embodiment of the present arrangements and that is used for at least one of inspection, location, measurement and identification of a craft 108.
  • craft inspection facility 100 may be an airplane hangar as shown in Figure 1.
  • Craft inspection facility 100 has dimensions that may be expressed in terms of three-linear axes, i.e., X, Y and Z.
  • craft inspection facility 100 includes a robotic envelope 102 (e.g. , a robotic envelope that is capable of receiving at least a portion or an entire craft) that may be defined by one or more sidewalls, which lie in the X-Z and/or Y-Z planes.
  • craft inspection facility 100 also includes one or more imaging sensor devices 104A-104H that are capable of imaging a portion of or an entire object, e.g., a craft or inspection equipment (i.e., robots or other nondestructive inspection equipment), that falls within a field of view 112.
  • In an operative state of imaging sensor devices 104A-104H, at least some of the imaging sensor devices have a field of view that, with respect to a point of reference 110, coordinates (0,0,0), captures an image of at least a portion of an object.
  • the different images obtained, with respect to point of reference 110, from at least some of imaging sensor devices 104A-104H are stitched together to form a three-dimensional image of the object, e.g. , craft 108.
  • the present teachings recognize that absolute coordinates of all locations on the object with respect to point of reference 110 are known so long as the positions of imaging sensor devices 104A-104H relative to the point of reference are known, as the sketch below illustrates.
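The point is purely geometric: once an imaging sensor device's position and orientation relative to point of reference 110 are known, any point the sensor measures can be re-expressed in facility coordinates by a rigid-body transform. A minimal sketch of that idea in Python (the function name, frame conventions and numbers are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def to_facility_frame(point_sensor, sensor_position, sensor_rotation):
    """Express a point measured in an imaging sensor's local frame in the
    facility frame whose origin is the (0, 0, 0) point of reference."""
    return np.asarray(sensor_rotation) @ np.asarray(point_sensor) + np.asarray(sensor_position)

# Example: a return detected 5 m straight ahead of a sensor mounted 10 m along X
# and 2 m above the floor, with the sensor axes aligned to the facility axes.
sensor_position = np.array([10.0, 0.0, 2.0])
sensor_rotation = np.eye(3)
print(to_facility_frame([0.0, 5.0, 0.0], sensor_position, sensor_rotation))  # -> [10. 5. 2.]
```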
  • floor-imaging sensor devices 106A-106B are disposed on the floor to image an underside of craft 108. It is important to note that imaging sensor devices 104B, 104C, 104F and 104G may also serve as floor-imaging sensor devices, but in those instances where a field of view of these imaging sensor devices is not able to capture an image because it covers a blocked and/or shadowed area, floor-imaging sensor devices 106A-106B are used. As a result, at least some of the imaging sensor devices 104A-104H and/or 106A-106B are used to capture a three-dimensional image of a craft. In one embodiment of the present arrangements, floor-imaging sensor devices are substantially similar to imaging sensor devices, except the floor-imaging sensor devices are disposed on the floor and imaging sensor devices need not be.
  • the above-described imaging sensor devices serve to inspect, identify, locate and measure an object, such as craft 108.
  • Figure 2 shows another craft inspection facility 200, according to another embodiment of the present arrangements.
  • Craft inspection facility 200 of Figure 2 is substantially similar to craft inspection facility 100 of Figure 1, i.e., robotic envelope 102 and imaging sensor devices 104A-104H of Figure 1 are substantially similar to their counterparts in Figure 2, i.e., robotic envelope 202 and imaging sensor devices 204A-204H, except that craft inspection facility 200 of Figure 2 does not include floor sensors 106A and 106B, as shown in Figure 1.
  • Craft inspection facility 200 of Figure 2, like craft inspection facility 100 of Figure 1, may house craft 208 and have a designated point of reference 210, i.e., coordinates (0,0,0), and a field of view 212 associated with each of imaging sensor devices 204A-204H. Point of reference 210 need not be inside robotic envelope 202 or, for that matter, inside craft inspection facility 200. Rather, the point of reference, coordinates (0,0,0), may be located anywhere and, so long as the distance from one or more of the imaging sensors and/or floor-imaging sensors to the point of reference is known, craft 208 and its components and/or subcomponents may be located in space.
  • FIG. 3 shows a craft inspection area 300, according to one embodiment of the present arrangements and that may serve the same purpose as craft inspection facility 100 of Figure 1. As is shown in Figure 3, it is not necessary, in the present arrangements, to have sidewalls to effectively inspect, locate, measure and identify a craft 308. To this end, craft inspection area 300 includes a floor (e.g., an apron) 303 that supports a craft 308 and sensor support structures 322A-322E. Preferably each of sensor support structures 322A-322E is fitted with imaging sensor devices 324A-324E and floor imaging sensor devices 306A-306E.
  • each of sensor support structures 322A-322E may have associated with it an imaging sensor device (i.e., one imaging sensor device chosen from 324A-324E) and a floor imaging sensor device (i.e., one floor imaging sensor device chosen from 306A-306E). Similar to each of imaging sensor devices 104A-104H and floor imaging sensor devices 106A and 106B, each of imaging sensor devices 324A-324E and floor imaging sensor devices 306A-306E has a field of view 312.
  • a point of reference 310 i.e., coordinates (0,0,0), is located on one of sensor support structures 322A-322E.
  • Figure 3 shows that point of reference 310 is located on sensor support structure 322C.
  • each sensor support structure may include an imaging sensor device and/or a floor-imaging sensor device. It is important to note that although sidewalls are not present, a robotic envelope (not shown to simplify illustration, but that is similar to robotic envelopes 102 or 202 of Figures 1 and 2, respectively) is defined in the space above the floor or the apron.
  • Figures 4A and 4B show certain components inside an imaging sensor device, according to one embodiment of the present arrangements and that allow the imaging sensor device to both transmit and receive light energy, respectively.
  • Figure 4A shows a configuration 400A of an imaging sensor device (e.g. , imaging sensor device 104A of Figure 1) or a floor imaging sensor device (e.g., floor imaging sensor devices 106A or 106B), emphasizing a transmission mode of operation.
  • a laser 430A generates an incident beam of light 436A that passes through a beam spreader 432A and then strikes and illuminates a portion of or an entire object 434A (e.g., a portion of or an entire craft 108 of Figure 1).
  • a reflected light 436B resulting from illumination of object 434A is produced and the components inside an imaging sensor device that receive reflected and/or scattered light 436B come into play.
  • Figure 4B is presented to show another configuration 400B of an imaging sensor device (e.g., imaging sensor device 104A of Figure 1) or a floor imaging sensor device (e.g., floor imaging sensor devices 106A or 106B) that emphasizes a receiving mode of operation.
  • reflected and/or scattered light 436B as the case may be, from object 434B is collected at a lensing structure 438B and projected onto an array of receivers 440B, where it is analyzed (e.g. , intensity and/or frequency of reflected and/or scattered light 436B is analyzed).
  • Figures 4A and 4B show certain components in their non-operative state.
  • Figure 4A shows that during the transmission mode of operation, lensing structure 438A and array of receivers 440A are not active.
  • Figure 4B shows that during the detection mode of operation, laser 430B and beam spreader 432B are not active.
  • laser 430A is a pulsed laser.
  • each time a laser is pulsed a light beam is incident on a portion of or an entire craft during a transmission mode of operation, and a signal associated with a reflected and/or scattered light is then collected during the receiver mode of operation (that follows the transmission mode of operation).
  • beam spreader 432A spreads incident beam 436A to define a field of view used to image object 434A.
  • a resulting reflected and/or scattered light 436B arrives at receiving telescope 438A and a photodetector 440.
  • Measurements, which may be in the form of electronic signals, obtained at photodetector 440 are conveyed to and analyzed by a data processor 442B.
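The figures do not spell out how range is recovered, but a pulsed-laser imaging sensor of this kind conventionally derives distance from the pulse's round-trip time. A minimal sketch of that standard time-of-flight relation (names and numbers are illustrative):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_round_trip(round_trip_seconds):
    """Distance to the reflecting surface for one laser pulse, computed from the
    time between firing the pulse and detecting its return (out and back)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return detected about 66.7 ns after firing corresponds to a surface roughly 10 m away.
print(range_from_round_trip(66.7e-9))
```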
  • both modes of operation may be performed contemporaneously.
  • both the transmission and detection modes of operation may be carried out at the same time and certain components (shown as inactive in one mode of operation in Figures 4A and 4B) are, in fact, active in both modes of operation. For example, if laser 430B is not pulsed and operates as a continuous beam generating device, then certain components may be active during both modes of operation.
  • FIG. 5 shows a positioning system 500, according to one embodiment of the present arrangements and that determines a position of a craft and/or its components/subcomponents in space.
  • Positioning system 500 includes a laser 530A and beam expander 532A, which are substantially the same as their counterparts in Figure 4A, i.e., laser 430A and beam expander 432A.
  • incident beam 536A generated inside positioning system 500 is substantially similar to incident beam 436A generated during transmission mode of operation, as shown by Figure 4A, of an imaging sensor device (e.g., imaging sensor device 104A of Figure 1).
  • positioning system 500 includes components that process a resulting reflected and/or scattered light 536B, which is substantially similar to reflected and/or scattered light 436B. Accordingly, positioning system 500 includes receiving telescope 538A and a photodetector 540 that are examples of lensing structure 438A and array of receivers 440A, respectively, of Figure 4A.
  • Positioning system 500 also includes additional structure not described or suggested in Figures 4A and 4B.
  • positioning system 500 includes transmitting optics 544A and 544B, i.e. , mirrors, that direct incident beam 536A to strike an object 534, e.g. , a craft 108 of Figure 1.
  • positioning system 500 includes an analog-to-digital converter 541 for converting the analog signal, which may be obtained from an array of detectors, such as photodetectors 540, to a digital signal that is processed by a signal and data processor 542.
  • signal and data processor 542 stitches images obtained from each of the imaging sensor devices (e.g., imaging sensor devices 104A-104H and floor imaging sensor devices 106A and 106B) to form a single two-dimensional and/or a single three-dimensional image. Preferably, however, only images obtained from imaging sensor devices that have an overlapping field of view are stitched together to form the two-dimensional and/or three-dimensional image.
  • the processed digital signal is then provided to and/or displayed at an output device 543, where the two- dimensional and/or the three-dimensional image may be displayed.
  • FIG. 6 shows a block diagram of a laser activating and recording system 600, according to one embodiment of the present arrangements and that activates a laser 630 to generate an output signal, which is ultimately recorded.
  • the output signal represents some information, e.g. , information regarding location, identification and measurement of objects, crafts, components and/or subcomponent, external objects (e.g. , imaging sensors, floor-imaging sensors, robots or equipment) that is the subject of an inquiry.
  • Laser activating and recording system 600 includes laser 630, which is preferably a pulsed laser that generates an incident beam 636A in pulses. The incident beam generated at each laser pulse traverses some optical components to illuminate an object.
  • a resulting reflected and/or scattered light 636B is collected at a telescopic mirror 638 and conveyed through a light guide 650 to a filter 652. After filtering background noise and other undesired information, reflected and/or scattered light 636B is measured by a detector 640 to generate a signal that is, preferably, commensurate with that measurement by detector 640.
  • the signal stored and/or processed by a data logger 642 may be the signal that is produced by an imaging sensor device (e.g., one of image sensing devices 104A-104H and floor imaging sensor devices 106A and 106B of Figure 1).
  • Computer 643 produces a two-dimensional and/or a three-dimensional image of the object, with respect to a point of reference, by stitching images captured by different imaging sensor devices having overlapping fields of view.
  • Once the computer forms the two-dimensional and/or three-dimensional image of the object, or determines that such an image may be formed, it sends out an instruction to fire another laser pulse and records another image of the object. In this manner, it is possible for laser activating and recording system 600 to track movement of an object from one location to another location; a simple version of this loop is sketched below.
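A rough sketch of that fire-record-repeat loop follows; fire_pulse_and_image is a hypothetical stand-in for one activation cycle of laser activating and recording system 600, and centroid tracking is only one simple way to register movement between frames:

```python
import numpy as np

def track_object(fire_pulse_and_image, n_frames=10):
    """Repeatedly trigger the pulsed laser, record an image of the object, and
    log how the object's centroid moves between successive frames.

    fire_pulse_and_image : hypothetical callable returning an (N, 3) point cloud
                           in facility coordinates for one laser firing
    """
    displacements, previous = [], None
    for _ in range(n_frames):
        cloud = np.asarray(fire_pulse_and_image())     # fire another pulse, record image
        centroid = cloud.mean(axis=0)
        if previous is not None:
            displacements.append(centroid - previous)  # movement since the last frame
        previous = centroid
    return displacements
```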
  • laser activating and recording system 600 effectively tracks various types of information, e.g. , inspection related information, such as defects and location, identification, verification and/or measurement of a craft, imaging sensors, external objects (e.g. , equipment) that may be present inside a craft inspection facility or a craft inspection area.
  • Figure 7 shows various imaging sensor devices 704A-704I, each of which has its own field of view, that are disposed inside a robotic envelope 702 of a craft inspection facility 700.
  • One of imaging sensor devices 704I has a field of view 712 that captures an image of a front portion of craft 708 as it approaches robotic envelope 702, which is substantially similar to robotic envelope 102 shown in craft inspection facility 100 of Figure 1.
  • imaging sensor device 704I includes a pulsed laser and laser activating and data recording system 600 of Figure 6.
  • imaging sensor device 704I may be used for gathering information about approaching craft 708 relative to a point of reference 710, e.g., coordinates (0,0,0), even though the craft has not yet arrived within robotic envelope 702.
  • FIG. 8 shows a craft inspection facility 800, which is substantially similar to craft inspection facility 700 of Figure 7, i.e., a robotic envelope 802, imaging sensor devices 804A-804D, and a point of reference 810 are substantially similar to their counterparts in Figure 7, i.e., robotic envelope 702, imaging sensor devices 704A-704I, and point of reference 710, except that craft inspection facility 800 shows a craft 808 inside robotic envelope 802, an imaging sensor device 804D having a field of view 812 that is imaging a side portion of craft 808, and markers 854A-854C on a sidewall of robotic envelope 802. Furthermore, some of the imaging sensor devices, 804A and 804C, are installed on a cable (also sometimes referred to as a "guy line") and other imaging sensor devices, 804B and 804D, are installed on another cable.
  • imaging sensor devices 804A-804D are capable of movement along the associated cables, i.e., those cables upon which they rest and may be held during an operative state of the imaging sensor devices.
  • markers 854A-854C serve in ascertaining the location of one or more of imaging sensor devices 804A-804D, relative to point of reference 810, that may have been displaced from one position to another along the cable.
  • a displaced imaging sensor device may send an input signal to one or more of markers 854A-854C, which are held at a predetermined location inside or outside robotic envelope 802.
  • the response signal(s), received from one or more of markers 854A-854C provide information on a new location of the displaced imaging sensor device relative to point of reference 810. This method of determining the location of a displaced imaging sensor device may be thought of as "triangulation" of the imaging sensor device.
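The text leaves the marker geometry and signal details open; one conventional way to realize the "triangulation" it mentions is range-based trilateration. A sketch under the assumption that the displaced sensor can measure its distance to at least four non-coplanar markers whose positions relative to the reference point are known (all names and numbers are hypothetical):

```python
import numpy as np

def locate_sensor(marker_positions, distances):
    """Estimate a displaced imaging sensor device's position from measured
    distances to markers at known locations. Subtracting the first marker's
    sphere equation from the others yields a linear system in the unknown
    position, solved here in a least-squares sense."""
    markers = np.asarray(marker_positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    p0, d0 = markers[0], d[0]
    A = 2.0 * (markers[1:] - p0)
    b = d0**2 - d[1:]**2 + np.sum(markers[1:]**2, axis=1) - np.sum(p0**2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

markers = [[0, 0, 0], [20, 0, 0], [0, 15, 0], [0, 0, 10]]       # meters, relative to (0,0,0)
true_position = np.array([6.0, 4.0, 3.0])
distances = [np.linalg.norm(true_position - m) for m in np.asarray(markers, dtype=float)]
print(locate_sensor(markers, distances))                         # approximately [6. 4. 3.]
```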
  • Regardless of whether an imaging sensor device is disposed on a floor and/or a sidewall of a robotic envelope or disposed on a cable, the present arrangements offer designs that allow for ascertaining the location of a displaced imaging sensor device inside the robotic envelope or outside the robotic envelope.
  • One reason why it is important to displace an imaging sensor device is that a field of view of a particular imaging sensor device may be blocked or may capture a shadowed area and, therefore, provide no information about the subject object, such as a craft.
  • Figure 9 shows another movement configuration of an imaging sensor device.
  • a craft inspection facility 900 in accordance with one embodiment of the present arrangements, includes a robotic envelope 902 for inquiring information about a craft 908 relative to a point of reference 910.
  • an imaging sensor device is initially in a first position 904 where its field of view 912 is blocked or captures a shadowed area and, therefore, provides no information about a craft 908.
  • Although markers (e.g., markers 854A-854C of Figure 8) are not shown in this embodiment to simplify illustration, they may very well be used to ascertain the position of the displaced imaging sensor device so that more accurate information of the craft relative to the point of reference may be obtained.
  • craft inspection facility 900 is substantially similar to craft inspection facility 200 of Figure 2. It is apparent from the description of the present arrangements, that it is not necessary to have multiple imaging sensor devices, and that a single imaging sensor device, deployed properly within the robotic envelope, may effectively provide information regarding a subject craft.
  • Figure 10 shows a craft inspection facility 1000, according to one embodiment of the present arrangements, including a robotic envelope 1002, which in turn is designed to collect information about a craft 1008 using a single imaging sensor device 1004.
  • single imaging sensor device 1004 may be disposed on any structure 1005 that allows its movement, e.g. , cable or track, and has approximately a 360° field of view.
  • Although craft inspection facility 1000 (robotic envelope 1002, imaging sensor device 1004, field of view 1012, point of reference 1010) is substantially similar to its counterparts in other figures presented herein, it is noteworthy that the arrangements discussed inside a craft inspection facility may very well be carried out using one or more sensor support structures, e.g., sensor support structures 322A-322E, on a craft inspection area, e.g., craft inspection area 300, that does not include sidewalls.
  • embodiments described herein that appear to require one or more robotic envelopes may well be implemented inside a craft inspection area (e.g. , craft inspection area 300 of Figure 3), which does not have sidewalls.
  • imaging sensor device may include certain relevant components shown in Figures 4, 5 and 6 that facilitate in providing information, e.g., at least one of inspection (defect information), location, identification, verification and/or measurement information.
  • the present teachings provide, among other methods, methods for locating an object (e.g., a craft) in space. According to certain embodiments of the present teachings, this method is not necessarily carried out using the structures and/or configurations shown in Figures 1-10.
  • Method of locating preferably begins with a step of receiving at least a portion of a craft inside a robotic envelope (e.g. , craft in Figures 1-3 and 8-10) for processing.
  • a step of determining is carried out.
  • a position of an imaging sensor (e.g., imaging sensors in Figures 1-3 and 8-10) is determined.
  • the imaging sensor may be coupled to or disposed on at least one member chosen from a group comprising one or more sidewalls, floor and roof.
  • the sidewalls are adjacent to the craft.
  • the floor is disposed below one or more of the sidewalls
  • the roof is disposed above one or more of the sidewalls.
  • At least one of one or more of the sidewalls, the floor or the roof defines a three-dimensional robotic envelope (e.g., robotic envelope in Figures 1-3 and 8-10) that is sufficiently spacious to receive at least a portion of the craft.
  • the position of the imaging sensor is determined relative to a reference point serving as (0,0,0) coordinates of the three-dimensional robotic envelope.
  • an imaging step may commence.
  • During the imaging step, at least a portion of the craft is imaged using the imaging sensors to develop a two-dimensional and/or a three-dimensional image of at least a portion of the craft.
  • the imaging sensor is preferably held stationary along X, Y and Z-axes at that position. In one embodiment of the present teachings, however, while the imaging sensor is held stationary along X, Y and Z-axes at a particular position, it is free to rotate in that position.
  • a locating step is initiated. Specifically, using the two-dimensional image and/or the three-dimensional image obtained from the previous imaging step, the craft is located in space relative to the reference point. In other words, the images obtained from the imaging step provide the coordinates of the detected object. To this end, the disclosure pertaining to Figures 4, 5 and 6 describes one exemplar manner by which the coordinates of the detected object are obtained.
  • the present teachings provide other embodiments of methods of locating.
  • the location method proceeds to a step of moving the imaging sensor to another or new position.
  • a determining step includes determining the new location or position of the imaging sensor (at another position) relative to the reference point of the three- dimensional robotic envelope.
  • This imaging step includes imaging at least the portion (that was imaged at the previous position) and/or another portion of the craft using the imaging sensor. As a result, another two-dimensional image and/or another three-dimensional image of at least the portion and/or another portion of the craft is/are obtained.
  • the imaging sensor, in this imaging step also, is held stationary along X, Y and Z-axes at the imaging sensor's new location or position.
  • the present teachings provide for processing the images obtained from the first imaging step (when the imaging sensor was at the first position) and images obtained from the second imaging step (when the imaging sensor was at the second or new position) to produce a resulting image.
  • the method of location may include stitching the first two-dimensional image and the second two-dimensional image and/or stitching the first three-dimensional image and the second three-dimensional image to form a resulting two-dimensional image and/or a resulting three-dimensional image, respectively.
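Because both images are expressed relative to the same (0,0,0) reference point, stitching can be as simple as merging the two clouds and collapsing the duplicated overlap; a production system would usually also refine the alignment (for example with an ICP-style step), which the text does not detail. A minimal sketch with an illustrative voxel size:

```python
import numpy as np

def stitch_clouds(cloud_a, cloud_b, voxel=0.01):
    """Merge two three-dimensional images (point clouds) already expressed in
    the common facility frame, collapsing points that fall in the same voxel so
    the overlapping region is not counted twice.

    voxel : edge length, in meters, of the grid used to merge duplicates
    """
    combined = np.vstack([np.asarray(cloud_a, dtype=float), np.asarray(cloud_b, dtype=float)])
    keys = np.round(combined / voxel).astype(np.int64)
    _, unique_idx = np.unique(keys, axis=0, return_index=True)
    return combined[np.sort(unique_idx)]
```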
  • the present location methods preferably further include confirming location in space of the craft and/or locating in space another portion of the craft (which was imaged during the second imaging step) relative to the reference point by using the resulting two-dimensional image and/or the resulting three-dimensional image.
  • the present teachings also provide for locating one or more components and/or one or more subcomponents of an object, e.g. , a craft, in space relative to the reference point.
  • certain preferred embodiments of the present location methods further include imaging, using the above-mentioned imaging sensor, at least a portion of a component and/or a subcomponent of the craft to develop a two-dimensional image and/or a three-dimensional image of at least the portion of the component and/or the subcomponent.
  • the imaging sensor is preferably held stationary along X, Y and Z-axes.
  • the present teachings provide for carrying out many different types of processes.
  • the method of location or another process method may be implemented. Regardless of which method is applied, the present teachings provide that after the component and/or the subcomponent of the craft is/are located in space, a step of mobilizing is carried out.
  • This step includes mobilizing one or more robots, each having one or more of the sensors, to safety zones within the robotic envelope.
  • a positioning step includes positioning one or more of the robots at approach points or at a standoff distance (that is associated with a robot dedicated to a particular inspection method) to commence processing of the component and/or the subcomponent.
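The text only states that robots are placed at approach points or at a standoff distance; one plausible way to compute such a point, assuming the outward surface normal at the located component is available from the three-dimensional image (names and numbers are illustrative):

```python
import numpy as np

def approach_point(target, surface_normal, standoff):
    """Back a robot's tool off from a located component along the outward
    surface normal by the standoff distance its inspection method requires.

    target         : (3,) component location in facility coordinates, meters
    surface_normal : (3,) outward normal of the craft surface at that location
    standoff       : required clearance, meters
    """
    n = np.asarray(surface_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.asarray(target, dtype=float) + standoff * n

print(approach_point([12.0, 3.5, 4.2], [0.0, 0.0, 1.0], 0.5))   # 0.5 m above the point
```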
  • Representative processing techniques include inspecting, manufacturing, repairing and overhauling.
  • the location method may proceed to a determining step.
  • This step involves determining, relative to the same reference point as the first imaging sensor, another or a new position of another or new imaging sensor, which is coupled to or disposed on at least one of one or more sidewalls, floor or roof.
  • the new imaging sensor captures a new field of view that is different from the field of view captured by the first imaging sensor.
  • the two different fields of views include one or more overlapping portions.
  • an imaging step may commence.
  • This imaging step includes imaging at least the portion and/or another portion of the craft using the new or second imaging sensor to develop another two-dimensional image and/or another three-dimensional image of at least the portion and/or another portion of the craft.
  • the second imaging sensor is held stationary along X, Y and Z-axes (at the position of the second imaging sensor).
  • the first two-dimensional image and/or the first three-dimensional image obtained from the first imaging sensor and the second two-dimensional image and/or the second three-dimensional image obtained from the second imaging sensor have two-dimensional overlapping portions and/or three-dimensional overlapping portions.
  • the two-dimensional overlapping portions and/or the three-dimensional overlapping portions are stitched to form a resulting two-dimensional image and/or a resulting three-dimensional image, respectively.
  • the present teachings allow locating in space the craft relative to the reference point.
  • the present teachings recognize that, using two or more imaging sensors, a component and/or a subcomponent of the craft is also located relative to a reference point in space.
  • the present methods are not limited to crafts and extend to objects generally. In certain embodiments, the present teachings provide methods of processing an object.
  • a step of determining includes determining a position of an imaging sensor relative to a reference point serving as (0,0,0) coordinates of the three-dimensional robotic envelope.
  • the imaging sensor may be coupled to or disposed on at least one of one or more sidewalls, floor or roof.
  • an imaging step involves imaging at least a portion of the object using the imaging sensors to develop a two-dimensional image and/or a three-dimensional image of at least a portion of the object.
  • the imaging sensors are held stationary along X, Y and Z-axes at the position (of the imaging sensor).
  • certain embodiments of the present methods proceed to locating in space the object relative to the reference point.
  • an imaging step includes imaging at least a portion of one or more components and/or one or more subcomponents of the located object using the imaging sensors to develop a two-dimensional image and/or a three-dimensional image of at least the portion of one or more of the components and/or one or more of the subcomponents of the object.
  • the imaging sensors are held stationary along X, Y and Z-axes at the position (of the imaging sensor).
  • certain embodiments of the present teachings proceed to locating in space one or more of the components and/or one or more of the subcomponents of the object relative to the reference point.
  • one or more of the components and/or one or more of the subcomponents of the object are ready to undergo processing.
  • Representative processing techniques include any one of removing, installing, cleaning, surface preparing, coating, polishing or painting.
  • the present teachings provide methods of identification and/or verification.
  • One such exemplar method begins with a step of obtaining a reference craft image.
  • a receiving step includes receiving a candidate craft within a field of view of one or more sensors.
  • an imaging step includes imaging the candidate craft to produce a candidate craft image.
  • one exemplar identification and/or verification method proceeds to a comparing step that includes comparing the reference craft image with the candidate craft image to obtain a residual between the reference craft image and the candidate craft image.
  • a determining step inquires whether the residual is within a margin of error, thereby identifying and/or verifying the candidate craft.
  • the reference craft image and the candidate craft image are each preferably a two-dimensional image and/or a three-dimensional image.
  • the margin of error represents a value that is less than about 1% deviation (in linear measurement units) from the location of identifying or verifying features present in the reference craft image. In other embodiments, this margin of error is less than about 2%, about 3%, about 4% or about 5% deviation (in linear measurement units) from the location of identifying or verifying features present in the reference craft image.
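One way to read this test, assuming the percentage deviation is the distance between a feature's location in the candidate and reference images normalized by a characteristic craft dimension (that normalization, and all names below, are assumptions rather than the patent's own definitions):

```python
import numpy as np

def within_margin(reference_features, candidate_features, craft_length, margin=0.01):
    """Accept the candidate craft if every identifying feature lies within the
    allowed percentage deviation of its location in the reference image.

    reference_features, candidate_features : dicts of feature name -> (x, y, z)
    craft_length : characteristic linear dimension used to express deviation as a fraction
    margin       : allowed fractional deviation (0.01 corresponds to about 1%)
    """
    for name, ref_xyz in reference_features.items():
        cand_xyz = candidate_features.get(name)
        if cand_xyz is None:
            return False                          # an identifying feature is missing entirely
        deviation = np.linalg.norm(np.subtract(cand_xyz, ref_xyz)) / craft_length
        if deviation > margin:
            return False
    return True
```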
  • the reference craft image is stored on a database.
  • the step of obtaining includes retrieving the reference craft image from the database.
  • the step of obtaining the reference craft image begins with a step of receiving a reference craft within one or more of the field of views of one or more of the sensors.
  • a step of imaging involves imaging the reference craft to produce the reference craft image.
  • stitching different reference craft images obtained from different fields of view may be required to produce the ultimate reference craft image that is used in subsequent steps.
  • the reference craft is the candidate craft and the reference craft image includes a prior candidate craft image obtained from the candidate craft prior to the above-mentioned step of receiving the candidate craft. Stated another way, the prior candidate craft image may be obtained during the candidate craft's prior visit.
  • a reference craft or its image is not needed and the candidate craft itself is used to generate the reference image.
  • the methods of identification and/or verification may further include identifying presence or modification of one or more components and/or one or more sub-components on the candidate craft or its image, when they were not present or modified on the reference craft image.
  • the identification and/or verification methods may further include identifying absence of one or more components and/or one or more sub-components on the candidate craft or its image, when they were present on the reference craft image.
  • the present identification and/or verification methods may further include identifying the model of one or more components and/or one or more subcomponents on the candidate craft.
  • By way of example, an image captured from one or more imaging sensors that are positioned proximate to a location that displays the craft's identification information, e.g., tail number, model number and/or serial information, effectively provides the identification information for the craft.
  • the step of determining includes identifying and/or verifying torque and/or alignment of one or more components and/or one or more subcomponents in the candidate craft relative to torque and/or alignment of one or more of the components and/or one or more of the subcomponents in the reference craft.
  • the determining step establishes whether a horizontal stabilizer component of the aircraft is out of alignment.
  • the determining step establishes whether a fuselage component of the aircraft is torqued relative to the aircraft's prior inspection.
  • the determining step establishes whether another subcomponent, e.g., winglets, has been added to a component, e.g., wings, since the aircraft's prior visit. Establishing the presence and/or absence of these conditions of the craft and/or additions or subtractions from the craft prevents robots, if used, from colliding with the craft and also allows for effective robotic pathways that increase throughput.
  • two images, the reference craft image and the candidate craft image, are electronically overlaid.
  • this step may use certain recognition algorithms that identify surface features by extracting landmarks, or features, from an image of the candidate craft's and/or reference craft's surface. For example, an algorithm may analyze the relative position, size, and/or shape of a component and/or subcomponent on the candidate craft, and these features may then be used to search for other images of a reference craft with matching features. Other algorithms normalize a gallery of surface images and then compress the surface recognition data, only saving the data in the image that is useful for candidate craft recognition. A candidate craft's image data is then compared with a reference craft's image data.
  • One of the recognition methods is based on template matching techniques applied to a set of salient craft surface or perimeter features, providing a sort of compressed craft representation.
  • Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, or photometric, which is a statistical approach that distills an image into values and compares the values with templates to eliminate variances.
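A toy illustration of the geometric branch, assuming matched landmarks (for example nose tip, wing tips and tail cap) have already been extracted from both images in the same order; the pairwise-distance descriptor and tolerance are illustrative choices, not the patent's:

```python
import numpy as np
from itertools import combinations

def geometric_signature(landmarks):
    """Pairwise distances between salient surface landmarks: a simple,
    pose-independent geometric descriptor of a craft surface."""
    pts = np.asarray(landmarks, dtype=float)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)])

def geometric_match(reference_landmarks, candidate_landmarks, tolerance=0.02):
    """Compare candidate and reference crafts by the relative differences of
    their signatures; assumes the same landmarks appear in the same order."""
    ref = geometric_signature(reference_landmarks)
    cand = geometric_signature(candidate_landmarks)
    return bool(np.all(np.abs(cand - ref) / ref <= tolerance))
```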
  • Representative recognition algorithms include Principal Component Analysis using eigenfaces and Linear Discriminant Analysis.
  • the present teachings recognize that improved accuracies during the comparison step may be achieved by using the three-dimensional surface recognition technique.
  • This technique uses 3D sensors to capture information about the shape of a craft, and/or component and/or subcomponent of the craft. This information is then used to identify distinctive features on the surface of the craft and/or its component and/or subcomponent, such as the contour of the cockpit windows, nose, and crown above the cabin area.
  • One advantage of the three-dimensional surface recognition technique is that it is not affected by changes in lighting like other techniques are. It can also identify a craft from a range of external viewing angles, including a profile view.
  • Three-dimensional data points from a craft vastly improve the precision of craft surface recognition.
  • the three-dimensional recognition technique is enhanced by the development of sophisticated imaging sensors that do a better job of capturing three-dimensional craft surface imagery.
  • the sensors work by projecting structured light (e.g., from a laser source 430A, 530A or 630 of Figures 4, 5 and 6, respectively) onto the craft surface. Up to a dozen or more of these image sensors may be placed on the same CMOS chip, such that each image sensor captures a different part of the spectrum.
  • One such exemplar method begins with receiving a candidate craft within one or more fields of view of one or more imaging sensors. Then a step of imaging involves imaging the candidate craft to produce a candidate craft image. Next, an obtaining step includes obtaining a reference craft image and one or more reference geometric measurements associated with the reference craft image. Next, a comparing step requires that the reference craft image be compared with the candidate craft image to obtain a residual between the reference craft image and the candidate craft image. The exemplar method then proceeds to a measuring step, in which one or more geometrical features of the residual are measured to arrive at one or more residual geometric measurements associated with the residual.
  • an adding or subtracting step includes adding or subtracting reference geometric measurements to the residual geometric measurements to arrive at candidate geometric measurements.
  • a geometric measurement is one property chosen from a group comprising surface area, volume and linear measurement.
  • the measurement methods of the present teachings use a two-dimensional image and/or three-dimensional image to provide a wide variety of measurement data.
  • the present measurement methods provide the dimensions of that component and/or subcomponent.
  • the present measurement methods provide length, area and volume of the detected defect.
  • the present measurement methods provide thickness and density values.
  • the image of a component and/or subcomponent may be of higher resolution than the image of the object, e.g., the craft.
  • the above-described different methods may include a step of storing that stores the images obtained from an imaging step, stores resulting images after stitching and/or stores the residual obtained after a comparing step.
  • the identification and/or verification methods and measurement methods use images obtained from imaging sensors with different fields of view. In these methods, like the locating methods, the images having overlapping portions are stitched to form a single image that is subsequently used for identification, verification and/or measurement.
  • the processing system may include a server and user devices, such as computers and mobile devices that are programmed to allow users to communicate over a network.
  • the server may include one or more computers, memory, additional data storage devices (e.g. , database), interface and processor.
  • the server may produce programming instructions, files and/or data to carry out or facilitate implementation of the above-mentioned steps and that may be transmitted over the network to the user devices.
  • the processor typically executes a program stored in memory to accept input and/or provide output through the network interface from/to the user devices.
  • Each user device includes its own network interface, memory, processor, screen and input component.
  • the mobile device may be a wireless device and the network may include wireless communication to the device.
  • the wireless network may include, but is not limited to, a cellular telephone network, a WiFi network, a WiMAX network, a Bluetooth network, a public switched telephone network (PSTN), the Internet and satellite.
  • a user may use the user device to enter or run a protocol, enter data, and/or analyze data stored on a server.
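To make the template-matching bullet above concrete, the following is a minimal, hedged sketch of how salient reference patches might be searched for in a candidate craft image. It is an illustration only, not the claimed method; the function name match_salient_features and the patch and threshold parameters are hypothetical, and it assumes OpenCV's normalized cross-correlation matcher.
```python
# Minimal sketch of template matching on craft surface imagery (hypothetical names).
# Assumes grayscale images loaded with OpenCV; not the application's own implementation.
import cv2

def match_salient_features(candidate_img, reference_patches, threshold=0.8):
    """Search a candidate craft image for a set of salient reference patches.

    reference_patches: small grayscale templates (e.g., a cockpit-window contour
    or a door outline) extracted from the reference craft image.
    Returns a list of (patch_index, (x, y), score) for matches above threshold.
    """
    matches = []
    for i, patch in enumerate(reference_patches):
        # Normalized cross-correlation is tolerant of uniform brightness changes.
        result = cv2.matchTemplate(candidate_img, patch, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val >= threshold:
            matches.append((i, max_loc, float(max_val)))
    return matches
```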

Abstract

Novel craft inspection facilities and methods relating thereto are described. The craft processing facility includes: (i) at least one member chosen from a group comprising one or more sidewalls, floor and roof, and wherein at least one of one or more sidewalls, floor or roof define a three-dimensional robotic envelope that is sufficiently spacious to receive at least a portion of a craft; (ii) one or more imaging sensors coupled to or disposed on any one member chosen from the group comprising one or more of sidewalls, floor and roof, and one or more of the imaging sensors operative to obtain a two-dimensional and/or a three-dimensional image of at least the portion of the craft; and (iii) a reference point serving as (0, 0, 0) coordinates of the three-dimensional robotic envelope.

Description

NOVEL SYSTEMS AND METHODS FOR PROCESSING AN OBJECT
RELATED APPLICATION
[0001] This application claims priority to U.S. Provisional Application Number 62/078,957 filed on November 12, 2014, which is incorporated herein by reference for all purposes.
FIELD
[0002] The present invention generally relates to processing objects. More particularly, the present invention relates to novel systems and methods that effectively allow processing, such as inspecting, manufacturing, maintaining, repairing, overhauling of objects (e.g. , crafts, vehicles, imaging sensors and other equipment that is used in a processing facility).
BACKGROUND
[0003] Depending on the application or use of certain objects, their processing, e.g., inspecting, manufacturing, maintaining, repairing, overhauling, may be of great importance. By way of example, aircraft are subjected to significant forces and, depending on their mission, sometimes significant G-forces. As a result, aircraft, and crafts generally, are prone to developing structural defects, such as disbonding and delamination. Moreover, exposure to water and/or water vapor over time causes corrosion of the craft surface. This corrosion also compromises the structural integrity of the craft. Unfortunately, the conventional craft inspection processes that attempt to monitor and identify these defects are arduous and time consuming. Furthermore, these drawbacks are not limited to crafts; rather, they also extend to other objects. What are, therefore, needed are high-throughput novel systems and methods for properly and reliably processing objects, such as crafts.
SUMMARY
[0004] To achieve the foregoing, the present arrangements and present teachings provide novel systems and methods for processing objects. In one aspect, the present arrangements provide craft processing facilities. In one embodiment, the present craft processing facility includes: (i) at least one member chosen from a group comprising one or more sidewalls, floor and roof, such that one or more of the sidewalls are adjacent to a craft during an inspection operation, the floor is disposed below one or more of the sidewalls and the roof is disposed above one or more of the sidewalls, and wherein at least one of one or more the sidewalls, the floor or the roof define a three-dimensional robotic envelope that is sufficiently spacious to receive at least a portion of the craft; (ii) one or more imaging sensors coupled to or disposed on any one member chosen from the group comprising one or more of sidewalls, floor and roof, and one or more of the imaging sensors operative to obtain a two-dimensional and/or a three- dimensional image of at least the portion of the craft; and (iii) a reference point serving as (0,0,0) coordinates of the three-dimensional robotic envelope. The reference point is not limited to any particular location. In exemplar implementations, the reference point is located at any one location chosen from a group comprising inside the three-dimensional robotic envelope, outside the three-dimensional robotic envelope and on the craft.
[0005] In certain embodiments of the present arrangements, the craft processing facility further comprises one or more sensor support structures, each of which supports one or more of the imaging sensors and is disposed on the floor and/or the roof of the craft processing facility. A sensor support structure may be any rigid structure that effectively supports the imaging sensors. By way of example, imaging sensors include components that enable both transmission and detection modes of operation. In this example, one or more of the imaging sensors include at least one member chosen from a group comprising laser, beam expander, receiving telescope, array of receivers and data processor.
[0006] It is important to note that a craft need not be inside the robotic envelope for the image sensors to capture a craft image. Rather, in one embodiment of the present arrangements, one or more of the imaging sensors have a field of view that captures a portion of the craft disposed outside the robotic envelope. As will be explained later, such an embodiment is useful in identifying and verifying a craft and its components and/or subcomponents.
[0007] At least some of the present arrangements contemplate movement of the imaging sensors during an imaging operation. In one embodiment, the present craft processing facility includes a guy line, such that one or more of the imaging sensors are movably disposed on the guy line. In certain aspects of this embodiment, the craft processing facility further includes markers that ascertain the position of one or more of the imaging sensors on the guy line. In this embodiment, the markers detect the signal transmitted by the imaging sensors and are, therefore, able to ascertain their position relative to the reference point.
[0008] In accordance with one embodiment, the craft processing facility of the present arrangements further includes one or more rails such that one or more of the imaging sensors are movably disposed on each of the rails. In this embodiment, one or more of the imaging sensors are capable of movement to locations on the rail that provide a field of view that does not capture blocked or shadowed areas of the craft. In another embodiment of the present arrangements, a single imaging sensor is used and this sensor captures up to a 360° field of view inside the robotic envelope.
[0009] In certain embodiments, the craft processing facility further includes robots that carry out one processing activity chosen from a group comprising inspection, manufacture, maintenance, repair and overhaul of an object (e.g. , a craft or a conveyance system).
[0010] In another aspect, the present teachings provide location methods. One such exemplar location method includes: (i) receiving at least a portion of a craft inside a robotic envelope for processing; (ii) determining a position of an imaging sensor that is coupled to or disposed on at least one member chosen from a group comprising one or more sidewalls, floor and roof, such that one or more of the sidewalls are adjacent to the craft during the processing operation, the floor is disposed below one or more of the sidewalls and the roof is disposed above one or more of the sidewalls, and wherein at least one of one or more the sidewalls, the floor or the roof define a three-dimensional robotic envelope that is sufficiently spacious to receive at least a portion of the craft, and wherein the position is determined relative to a reference point serving as (0,0,0) coordinates of the three-dimensional robotic envelope; (iii) imaging at least a portion of the craft using the imaging sensors to develop a two-dimensional and/or a three-dimensional image of at least a portion of said craft, and wherein during the imaging, the imaging sensor is held stationary along X, Y and Z-axes at the position; and (iv) locating in space the craft relative to the reference point by using the two-dimensional image and/or the three-dimensional image.
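As a rough, non-limiting illustration of how the locating step might be realized numerically, the sketch below converts points measured in an imaging sensor's own frame into facility coordinates, given the sensor's known position and orientation relative to the (0,0,0) reference point. The function names and the rigid-transform formulation are assumptions made for illustration, not limitations of the method.
```python
# Hedged sketch: express sensor measurements in the facility frame whose origin
# is the (0,0,0) reference point of the robotic envelope. Names are hypothetical.
import numpy as np

def sensor_points_to_facility_frame(points_sensor, sensor_position, sensor_rotation):
    """points_sensor: (N, 3) XYZ returns in the imaging sensor's own frame.
    sensor_position: (3,) sensor location relative to the reference point.
    sensor_rotation: (3, 3) rotation matrix from sensor frame to facility frame.
    Returns (N, 3) points expressed relative to the (0,0,0) reference point."""
    pts = np.asarray(points_sensor, dtype=float)
    rot = np.asarray(sensor_rotation, dtype=float)
    return pts @ rot.T + np.asarray(sensor_position, dtype=float)

def locate_craft(points_facility):
    """A crude 'location in space': centroid and axis-aligned bounding box of the
    imaged portion of the craft, in facility coordinates."""
    pts = np.asarray(points_facility, dtype=float)
    return pts.mean(axis=0), (pts.min(axis=0), pts.max(axis=0))
```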
[0011] As will be explained later, in some instances the imaging sensors need to be moved to another position. In such instances, the location method further includes: (i) moving the imaging sensor to another position; (ii) determining location of another position of the imaging sensor relative to the reference point of the three-dimensional robotic envelope; (iii) imaging at least the portion and/or another portion of the craft using the imaging sensor to develop another two- dimensional image and/or another three-dimensional image of at least the portion and/or the specified another portion of the craft, and wherein during the imaging, the imaging sensor is held stationary along X, Y and Z-axes at another position.
[0012] The present location methods may further include stitching the two-dimensional image and the specified another two-dimensional image and/or stitching the three-dimensional image and the specified another three-dimensional image to form a resulting two-dimensional image and/or a resulting three-dimensional image, respectively. In this embodiment, the present location methods further include confirming location in space of the craft and/or locating in space the specified another portion of the craft relative to the reference point by using the resulting two-dimensional image and/or the resulting three-dimensional image.
[0013] In one embodiment of the present teachings, when a single image is to be captured, the imaging sensor is, at a certain position, held stationary along X, Y and Z-axes, but is free to rotate in that position. Further, during an imaging operation, when multiple image frames may be collected from a single imaging sensor, the imaging sensor is free to move from one position to another.
[0014] In certain preferred embodiments, the present location methods further include: (i) imaging, using the imaging sensor, at least a portion of a component and/or a subcomponent of the craft to develop a two-dimensional image and/or a three-dimensional image of at least the portion of the component and/or the subcomponent, and wherein during the imaging, the imaging sensor is held stationary along X, Y and Z-axes at the position (of the imaging sensor); and (ii) locating in space the component and/or the subcomponent of the craft relative to the reference point by using the two-dimensional image and/or the three-dimensional image of at least the portion of the component and/or the subcomponent.
[0015] Once the component and/or the subcomponent of the craft is located in space, the present location methods may further still include: (i) mobilizing one or more robots, each having one or more of the sensors, to safety zones within the robotic envelope; and (ii) positioning one or more of the robots at approach points and/or at a standoff distance to commence processing of the component and/or the subcomponent. Representative processing techniques include at least one technique chosen from a group comprising inspecting, manufacturing, maintaining, repairing and overhauling.
[0016] Instead of moving a single sensor from one position to another, certain preferred embodiments of the present teachings rely on images obtained from two or more imaging sensors. In these embodiments, the location method further includes: (i) determining another position of another imaging sensor that is coupled to or disposed on at least one member chosen from the group comprising one or more sidewalls, floor and roof, and wherein the specified another position is determined relative to the reference point, and wherein the imaging sensor captures a field of view and the specified another imaging sensor captures another field of view that is not the same as the field of view, but the field of view and the specified another field of view include one or more overlapping portions; (ii) imaging at least the portion and/or another portion of the craft using the specified another imaging sensor to develop another two- dimensional image and/or another three-dimensional image of at least the portion and/or another portion of the craft, and wherein during the imaging step, the specified another imaging sensor is held stationary along X, Y and Z-axes at the specified another position, and wherein the two- dimensional image and the another two-dimensional image have two-dimensional overlapping portions and/or the three-dimensional image and the another three-dimensional image have three-dimensional overlapping portions; (iii) stitching the two-dimensional overlapping portions and/or the three-dimensional overlapping portions to form a resulting two-dimensional image and/or a resulting three-dimensional image, respectively; and (iv) locating in space the craft relative to the reference point by using the resulting two-dimensional image and/or the resulting three-dimensional image.
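One possible way to perform the stitching of images whose fields of view overlap, as described in the preceding paragraph, is with an off-the-shelf image stitcher. The sketch below uses OpenCV's high-level stitcher purely as a stand-in for the stitching step; the file names are placeholders and nothing here is specific to the claimed arrangements.
```python
# Hedged sketch: stitch two images with overlapping fields of view into a single
# resulting image. OpenCV's stitcher is used only as an illustrative stand-in.
import cv2

def stitch_overlapping_images(image_a, image_b):
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # planar "scan" mode
    status, stitched = stitcher.stitch([image_a, image_b])
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return stitched

# Example usage with placeholder file names:
# img_a = cv2.imread("sensor_A_view.png")
# img_b = cv2.imread("sensor_B_view.png")
# resulting_image = stitch_overlapping_images(img_a, img_b)
```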
[0017] In yet another aspect, the present teachings provide methods of processing an object. One such exemplar method includes: (i) receiving at least a portion of an object inside a robotic envelope; (ii) determining a position of an imaging sensor that is coupled to or disposed on at least one member chosen from the group comprising one or more of sidewalls, floor and roof, and wherein the position is determined relative to a reference point serving as (0,0,0) coordinates of the three-dimensional robotic envelope; (iii) imaging at least a portion of the object using the imaging sensors to develop a two-dimensional image and/or a three-dimensional image of at least a portion of the object, and wherein during the imaging step, the imaging sensors are held stationary along X, Y and Z-axes at the position (of the imaging sensor); (iv) locating in space the object relative to the reference point by using the two-dimensional image and/or the three- dimensional image; (v) imaging at least a portion of one or more components and/or one or more subcomponents of the located object using the imaging sensors to develop a two-dimensional image and/or a three-dimensional image of at least the portion of one or more of the components and/or one or more of the subcomponents of the object, and wherein during the step of imaging at least the portion of one or more of the components and/or one or more of the subcomponents, the imaging sensors are held stationary along X, Y and Z-axes at the position (of the imaging sensor); (vi) locating in space one or more of the components and/or one or more of the subcomponents of the object relative to the reference point by using a two-dimensional image and/or a three-dimensional image of one or more of the components and/or one or more of the subcomponents of the object; and (vii) processing one or more of the components and/or one or more of the subcomponents of the object, wherein the processing includes at least one process chosen from a group comprising removing, installing, cleaning, surface preparing, coating, polishing or painting.
[0018] In yet another aspect, the present teachings provide methods of identification and/or verification. One such exemplar method includes: (i) obtaining a reference craft image; (ii) receiving a candidate craft within a field of view of one or more sensors; (iii) imaging the candidate craft to produce a candidate craft image; (iv) comparing the reference craft image with the candidate craft image to obtain a residual between the reference craft image and the candidate craft image; and (v) determining whether the residual is within a margin of error and identifying and/or verifying the candidate craft. Each of the reference craft image and the candidate craft image is preferably a two-dimensional image and/or a three-dimensional image. In one embodiment of the present teachings, the margin of error represents a value that is less than about 1% deviation (in linear measurement units) from the location of identifying or verifying features present in the reference craft image. In other embodiments, this margin of error is less than about 2%, about 3%, about 4% or about 5% deviation (in linear measurement units) from the location of identifying or verifying features present in the reference craft image.
[0019] In one embodiment of the present teachings, where the reference craft image is stored on a database, the step of obtaining the reference craft image includes retrieving the reference craft image from the database. In an alternative embodiment of the present teachings, the step of obtaining the reference craft image includes: (i) receiving a reference craft within one or more of the fields of view of one or more of the sensors; and (ii) imaging the reference craft to produce the reference craft image. As explained above, stitching different reference craft images obtained from different fields of view may be required to produce the ultimate reference craft image that is used in subsequent steps. Regardless, in one implementation of this embodiment, the reference craft is the candidate craft and the reference craft image includes a prior candidate craft image obtained from the candidate craft prior to the above-mentioned step of receiving the candidate craft.
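The comparing and determining steps of this identification method can be pictured with a short, hedged sketch: difference the registered reference and candidate images to form a residual, and accept the verification when the residual stays within a margin of error. The pixel-fraction test below is a deliberate simplification of the margin-of-error criterion described above (which is expressed in linear measurement units), and all names are hypothetical.
```python
# Hedged sketch of compare-and-verify: residual between registered images,
# accepted if it stays within a margin of error. Simplified, illustrative only.
import numpy as np

def verify_candidate(reference_img, candidate_img, margin=0.01):
    """reference_img, candidate_img: registered 2D arrays of identical shape.
    Returns (is_verified, residual_fraction)."""
    ref = np.asarray(reference_img, dtype=float)
    cand = np.asarray(candidate_img, dtype=float)
    if ref.shape != cand.shape:
        raise ValueError("images must first be registered to the same shape")
    residual = np.abs(ref - cand)
    # Fraction of pixels deviating noticeably from the reference image.
    deviating = np.count_nonzero(residual > 0.05 * ref.max())
    residual_fraction = deviating / residual.size
    return residual_fraction <= margin, residual_fraction
```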
[0020] The methods of identification and/or verification may further include identifying presence or modification of one or more components and/or one or more sub-components on the candidate craft, and wherein one or more of the components and/or one or more of the subcomponents are not present or modified on the reference craft image. Alternatively, the identification and/or verification methods may further include identifying absence of one or more components and/or one or more sub-components on the candidate craft, and wherein one or more of the components and/or one or more of the subcomponents are present on the reference craft image.
[0021] There are numerous applications of the present identification and/or verification methods. In one preferred embodiment, the present identification and/or verification methods may further include identifying the model of one or more components and/or one or more subcomponents on the candidate craft. In another preferred embodiment of the identification and/or verification methods, the step of determining includes identifying and/or verifying torque and/or alignment of one or more components and/or one or more subcomponents in the candidate craft relative to torque and/or alignment of one or more of the components and/or one or more of the subcomponents in the reference craft.
[0022] To implement the above-mentioned comparing step, various algorithms may be used. By way of example, comparing the reference craft image with the candidate craft image includes implementing one algorithm chosen from a group comprising geometric recognition algorithm, photometric recognition algorithm and three-dimensional surface recognition algorithm.
[0023] In yet another aspect, the present teachings provide measurement methods. One such exemplar method includes: (i) receiving a candidate craft within one or more field of views of one or more imaging sensors; (ii) imaging the candidate craft to produce a candidate craft image; (iii) obtaining a reference craft image and one or more reference geometric measurements associated with the reference craft image; (iv) comparing the reference craft image with the candidate craft image and obtaining a residual between the reference craft image and the candidate craft image; (v) measuring one or more geometrical features of the residual to arrive at one or more residual geometric measurements associated with the residual; and (vi) adding or subtracting reference geometric measurements to the residual geometric measurements and arriving at candidate geometric measurements. By way of example, a geometric measurement is one property chosen from a group comprising surface area, volume and linear measurement. As a result, the measurement methods of the present teachings use a two-dimensional image and/or three-dimensional image to provide a wide-variety of measurement data. In the case of a component and/or subcomponent, the present measurement methods provide the dimensions of that component and/or subcomponent. As another example, in the case of a defect on a craft, the present measurement methods provide length, area and volume of the detected defect. As yet another example, in the case of paint and coatings on an object, the present measurement methods provide thickness and density values.
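A hedged sketch of the measuring and adding/subtracting steps follows: measure a geometric property of the residual (here, an area computed from a binary residual mask and a known pixel scale) and combine it with the reference measurement to arrive at the candidate measurement. The names, the mask convention and the pixel-scale assumption are illustrative only.
```python
# Hedged sketch of the measurement method's final steps. Illustrative names.
import numpy as np

def residual_area(residual_mask, metres_per_pixel):
    """residual_mask: boolean 2D array marking where the candidate differs from
    the reference; returns the corresponding surface area in square metres."""
    return np.count_nonzero(residual_mask) * metres_per_pixel ** 2

def candidate_measurement(reference_measurement, residual_measurement, added=True):
    """Add the residual measurement to, or subtract it from, the reference
    measurement; 'added' is True when the residual represents structure that is
    present on the candidate craft but absent from the reference."""
    if added:
        return reference_measurement + residual_measurement
    return reference_measurement - residual_measurement
```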
[0024] For a better understanding of the present arrangements and present teachings, their operating advantages attained by the different embodiments thereof, reference should be had to the accompanied drawings and descriptive matter of these different embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] Figure 1 shows a perspective view of a craft inspection facility, according to one embodiment of the present arrangements, for imaging a craft using imaging sensor devices disposed on the sidewalls and floor of the craft inspection facility.
[0026] Figure 2 shows a perspective view of a craft inspection facility, according to another embodiment of the present arrangements and that is substantially similar to the craft inspection facility of Figure 1, except that it does not include imaging sensor devices on the floor of the craft inspection facility.
[0027] Figure 3 shows a perspective view of a craft inspection area, according to one embodiment of the present arrangements, for inspecting a craft using imaging sensor devices, at least some of which are disposed on a sensor support structure.
[0028] Figure 4 shows a block diagram of certain components inside an imaging sensor device, according to one embodiment of the present arrangements and that allow the imaging sensor device to both transmit and receive light energy.
[0029] Figure 5 shows a block diagram of a positioning system, according to one embodiment of the present arrangements and that determines a position of an object, such as a craft and/or its components/subcomponents.
[0030] Figure 6 shows a block diagram of a laser activating and recording system, according to one embodiment of the present arrangements and that activates a laser to obtain data that is recorded in a recording system.
[0031] Figure 7 shows a perspective view of a craft inspection facility, according to yet another embodiment of the present arrangements and that is capable of identifying, positioning and measuring a craft outside the craft inspection facility.
[0032] Figure 8 shows a perspective view of a craft inspection facility, according to yet another embodiment of the present arrangements and that includes imaging sensor devices, which are mobile on a cable (e.g., a guy wire) and are capable of determining their location from markers located inside the craft inspection facility.
[0033] Figure 9 shows a perspective view of a craft inspection facility, according to yet another embodiment of the present arrangements and that includes imaging sensor devices, which are mobile to avoid shadowed or blocked areas.
[0034] Figure 10 shows a perspective view of a craft inspection facility, according to yet another embodiment of the present arrangements and that includes an imaging sensor device having a field of view that is approximately 360°.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0035] In the following description numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without limitation to some or all of these specific details. By way of example, although Figures 1-10 describe processing, according to the present arrangements and teachings, of crafts, the present teachings are not so limited. The present arrangements and present teachings described herein extend generally to objects that may not be crafts. Examples of such objects include vehicles, equipment (e.g., imaging sensors, robots, and external objects inside a processing facility that are not imaging sensors and robots). In other instances, well known process steps have not been described in detail in order to not unnecessarily obscure the invention. For example, various well-known image recognition algorithms are described without providing details that are well known to those skilled in the image recognition art.
[0036] Figure 1 shows a perspective view of a craft inspection facility 100, according to one embodiment of the present arrangements and that is used for at least one of inspection, location, measurement and identification of a craft 108. By way of example, craft inspection facility 100 may be an airplane hangar as shown in Figure 1. Craft inspection facility 100 has dimensions that may be expressed in terms of three-linear axes, i.e., X, Y and Z. To this end, craft inspection facility 100 includes a robotic envelope 102 (e.g. , a robotic envelope that is capable of receiving at least a portion or an entire craft) that may be defined by one or more sidewalls, which lie in the X-Z and/or Y-Z planes.
[0037] Regardless of the planes that sidewalls may be disposed in, craft inspection facility 100 also includes one or more imaging sensor devices 104A-104H that are capable of imaging a portion of or an entire object, e.g. , a craft or inspection equipment (i. e. , robots or other nondestructive inspection equipment), that falls within a field of view 112. In an operative state of the imaging sensor devices 104A-104H, at least some of the imaging sensor devices have a field of view that, with respect to a point of reference 110, coordinates (0,0,0), captures an image of at least a portion of an object. As will be explained later, the different images obtained, with respect to point of reference 110, from at least some of imaging sensor devices 104A-104H are stitched together to form a three-dimensional image of the object, e.g. , craft 108. The present teachings believe that absolute coordinates of all locations on the object with respect to point of reference 110 are known so long as the positions of imaging sensor devices 104A-104H relative to the point of reference are known.
[0038] In one embodiment of the present craft inspection facilities and as shown in Figure 1, imaging sensor devices 106A-106B are disposed on the floor to image an underside of craft 108. It is important to note that imaging sensor devices 104B, 104C, 104F and 104G may also serve as floor-imaging sensor devices, but in those instances where a field of view of these imaging sensor devices is not able to capture an image because it covers a blocked and/or shadowed area, floor-imaging sensor devices 106A-106B are used. As a result, at least some of the imaging sensor devices 104A-104H and/or 106A-106B are used to capture a three-dimensional image of a craft. In one embodiment of the present arrangements, floor-imaging sensor devices are substantially similar to imaging sensor devices, except the floor-imaging sensor devices are disposed on the floor and imaging sensor devices need not be.
[0039] Regardless of whether floor-imaging sensor devices are necessary or not, as will be explained later, the above-described imaging sensor devices (whether or not on the floor) serve to inspect, identify, locate and measure an object, such as craft 108.
[0040] Figure 2 shows another craft inspection facility 200, according to another
embodiment of the present arrangements and that may be used for at least one of inspection, location, measurement and identification of a craft 208. Craft inspection facility 200 of Figure 2 is substantially similar to craft inspection facility 100 of Figure 1, i.e. , robotic envelope 102 and imaging sensor devices 104A-104H are substantially similar to their counterparts in Figure 2, i.e., robotic envelope 202 and imaging sensor devices 204A-204H, except craft inspection facility 200 of Figure 2 does not include floor sensors 106 A and 106B, as shown in Figure 1. Craft inspection facility 200 of Figure 2, like craft inspection facility 100 of Figure 1, may house craft 208, have a designated point of reference 210, i.e. , coordinates (0,0,0), and a field of view 212 associated with each of imaging sensor devices 204A-204H. It is important to note, however, that point of reference, coordinates (0,0,0), need not be inside robotic envelope 202 or for that matter, inside craft inspection facility 200. Rather point of reference, coordinates (0,0,0) may be located anywhere and so long as the distance from one or more of the imaging sensors and/or floor-imaging sensors to the point of reference is known, craft 208 and its components and/or subcomponents may be located in space.
[0041] Figure 3 shows a craft inspection area 300, according to one embodiment of the present arrangements and that may serve the same purpose as craft inspection facility 100 of Figure 1. As is shown in Figure 3, it is not necessary, in the present arrangements, to have sidewalls to effectively inspect, locate, measure and identify a craft 308. To this end, craft inspection area 300 includes a floor (e.g., an apron) 303 that supports a craft 308 and sensor support structures 322A-322E. Preferably, each of sensor support structures 322A-322E is fitted with imaging sensor devices 324A-324E and floor imaging sensor devices 306A-306E. In other words, each of sensor support structures 322A-322E may have associated with it an imaging sensor device (i.e., one imaging sensor device chosen from 324A-324E) and a floor imaging sensor device (i.e., one floor imaging sensor device chosen from 306A-306E). Similar to each of imaging sensor devices 104A-104H and floor imaging sensor devices 106A and 106B, each of imaging sensor devices 324A-324E and floor imaging sensor devices 306A-306E has a field of view 312. A point of reference 310, i.e., coordinates (0,0,0), is located on one of sensor support structures 322A-322E. By way of example, Figure 3 shows that point of reference 310 is located on sensor support structure 322C. It is noteworthy that it is not necessary that each sensor support structure include an imaging sensor device and/or a floor-imaging sensor device. It is important to note that although sidewalls are not present, a robotic envelope (not shown to simplify illustration, but that is similar to robotic envelopes 102 or 202 of Figures 1 and 2, respectively) is defined in the space above the floor or the apron.
[0042] Figures 4A and 4B show certain components inside an imaging sensor device, according to one embodiment of the present arrangements and that allow the imaging sensor device to both transmit and receive light energy, respectively. To this end, Figure 4A shows a configuration 400A of an imaging sensor device (e.g. , imaging sensor device 104A of Figure 1) or a floor imaging sensor device (e.g., floor imaging sensor devices 106A or 106B), emphasizing a transmission mode of operation. In this mode of operation, a laser 430A generates an incident beam of light 436A that passes through a beam spreader 432A that strikes and illuminates a portion of or an entire object 434A, (e.g. , a portion of or an entire craft 108 of Figure 1). A reflected light 436B resulting from illumination of object 434 A is produced and the components inside an imaging sensor device that receive reflected and/or scattered light 436B come into play.
[0043] Figure 4B is presented to show another configuration 400B of an imaging sensor device (e.g. , imaging sensor device 104 A of Figure 1) or a floor imaging sensor device (e.g., floor imaging sensor devices 106A or 106B) that emphasizes a receiving mode of operation. In this mode of operation, reflected and/or scattered light 436B, as the case may be, from object 434B is collected at a lensing structure 438B and projected onto an array of receivers 440B, where it is analyzed (e.g. , intensity and/or frequency of reflected and/or scattered light 436B is analyzed).
[0044] Figures 4A and 4B show certain components in their non-operative state. By way of example, Figure 4A shows that during the transmission mode of operation, lensing structure 438A and array of receivers 440A are not active. In this example, Figure 4B shows that during the detection mode of operation, laser 430B and beam spreader 432B are not active. This may be true in preferred embodiments of the present arrangements, when laser 430A is a pulsed laser. In these embodiments, each time a laser is pulsed a light beam is incident on a portion of or an entire craft during a transmission mode of operation, and a signal associated with a reflected and/or scattered light is then collected during the receiver mode of operation (that follows the transmission mode of operation).
[0045] As shown in Figure 4A, during the transmission mode of operation as laser 430A is activated, beam spreader 432B defines a field of view 436A to image object 434A. In the detection mode of operation, a resulting reflected and/or scattered light 436B arrives at receiving telescope 438A and a photodetector 440. Measurements, which may be in the form of electronic signals, obtained at photodetector 440 are conveyed and analyzed by a data processor 442B.
[0046] It is noteworthy that in other embodiments of the present arrangements, both modes of operation may be performed contemporaneously. In other words, both the transmission and detection modes of operation may be carried out at the same time, and certain components (shown as inactive in one mode of operation in Figures 4A and 4B) are, in fact, active in both modes of operation. For example, if laser 430B is not pulsed and operates as a continuous beam-generating device, then certain components may be active during both modes of operation.
[0047] Figure 5 shows a positioning system 500, according to one embodiment of the present arrangements and that determines a position of a craft and/or its components/subcomponents in space. Some of the components shown in Figures 4A and 4B are incorporated into positioning system 500. Positioning system 500 includes a laser 530A and beam expander 532A, which are substantially the same as their counterparts in Figure 4A, i.e., laser 430A and beam expander 432A. Furthermore, incident beam 536A generated inside positioning system 500 is substantially similar to incident beam 436A generated during transmission mode of operation, as shown by Figure 4 A, of an imaging sensor device (e.g. , imaging sensor device 104 A of Figure 1). Further, positioning system 500 includes components that process a resulting reflected and/or scattered light 536B, which is substantially similar to reflected and/or scattered light 436B. Accordingly, positioning system 500 includes receiving telescope 538A and a photodetector 540 that are examples of lensing structure 438A and array of receivers 440A, respectively, of Figure 4A.
[0048] Positioning system 500 also includes additional structure not described or suggested in Figures 4A and 4B. With respect to the transmission arrangement, positioning system 500 includes transmitting optics 544A and 544B, i.e. , mirrors, that direct incident beam 536A to strike an object 534, e.g. , a craft 108 of Figure 1. With respect to the receiving arrangement, positioning system includes an analog to digital converter 541 for converting the analog signal, which may be obtained from an array of detectors, such as photodetectors 540, to a digital signal that is processed by a signal and data processor 542. In one embodiment of the present teachings, signal and data processor 542 stitches images obtained from each of the imaging sensor devices (e.g. , image sensor devices 104A-104H and floor imaging sensor devices 106A and 106B) to form a single two-dimensional and/or a single three-dimensional image. Preferably, however, only images obtained from imaging sensor devices that have an overlapping field of view are stitched together to form the two-dimensional and/or three-dimensional image. The processed digital signal is then provided to and/or displayed at an output device 543, where the two- dimensional and/or the three-dimensional image may be displayed.
[0049] Figure 6 shows a block diagram of a laser activating and recording system 600, according to one embodiment of the present arrangements and that activates a laser 630 to generate an output signal, which is ultimately recorded. The output signal represents some information, e.g., information regarding location, identification and measurement of objects, crafts, components and/or subcomponents, external objects (e.g., imaging sensors, floor-imaging sensors, robots or equipment) that is the subject of an inquiry. Laser activating and recording system 600 includes laser 630, which is preferably a pulsed laser that generates an incident beam 636A in pulses. The incident beam generated at each laser pulse traverses some optical components to illuminate an object. A resulting reflected and/or scattered light 636B is collected at a telescopic mirror 638 and conveyed through a light guide 650 to a filter 652. After filtering background noise and other undesired information, reflected and/or scattered light 636B is measured by a detector 640 to generate a signal that is, preferably, commensurate with that measurement by detector 640. The signal stored and/or processed by a data logger 642 may be the signal that is produced by an imaging sensor device (e.g., one of image sensing devices 104A-104H and floor imaging sensor devices 106A and 106B of Figure 1). The data from the different detectors and data loggers, each associated with a specific imaging sensor device, are sent to a computer 643. Computer 643 produces a two-dimensional and/or a three-dimensional image of the object, with respect to a point of reference, by stitching images captured by different imaging sensor devices having overlapping fields of view. As soon as the computer forms the two-dimensional and/or three-dimensional image of the object, or determines that such an image may be formed, it sends out an instruction to fire another laser pulse and records another image of the object. In this manner, it is possible for laser activating and recording system 600 to track movement of an object from one location to another location.
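For context, when each recorded laser pulse yields a round-trip travel time, the range to the illuminated spot follows from the familiar time-of-flight relationship. The snippet below states that generic relationship only; it is not a formula recited in the application.
```python
# Generic time-of-flight range calculation for a pulsed laser return.
C = 299_792_458.0  # speed of light in vacuum, m/s

def pulse_range_metres(round_trip_seconds):
    """Distance to the illuminated spot given the pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# Example: a return delayed by ~66.7 ns corresponds to roughly 10 m of range.
# pulse_range_metres(66.7e-9)  # -> ~10.0
```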
[0050] By way of example, laser activating and recording system 600 effectively tracks various types of information, e.g., inspection-related information, such as defects and location, identification, verification and/or measurement of a craft, imaging sensors, external objects (e.g., equipment) that may be present inside a craft inspection facility or a craft inspection area. Figure 7 shows various imaging sensor devices 704A-704I, each of which has its own field of view, that are disposed inside a robotic envelope 702 of a craft inspection facility 700. One of imaging sensor devices, 704I, has a field of view 712 that captures an image of a front portion of craft 708 as it approaches robotic envelope 702, which is substantially similar to robotic envelope 102 shown in craft inspection facility 100 of Figure 1. In one embodiment of the present arrangements, imaging sensor device 704I includes a pulsed laser and laser activating and data recording system 600 of Figure 6. In this embodiment, imaging sensor device 704I may be used for gathering information about approaching craft 708 relative to a point of reference 710, e.g., coordinates (0,0,0), even though the craft has not yet arrived within the robotic envelope 702.
[0051] By way of example, allowing a laser activating and recording system, through field of view 712 of imaging sensor device 704I, to capture an image of a nose and a portion of the fuselage of craft 708 enables identification of the craft. It is noteworthy that in one embodiment of the present teachings, if only an identification method is implemented, it is not necessary to designate a point of reference, coordinates (0,0,0). In this embodiment, however, a reference craft is used to obtain a reference image of the nose and a portion of the fuselage in field of view 712. Then, as will be explained in greater detail below, during identification of a candidate craft (which is subsequent to obtaining of the reference image), a candidate image is obtained by imaging a corresponding nose and portion of the fuselage of the candidate craft. A process of identifying the candidate craft then includes overlaying the reference image and the candidate image to then subtract out the candidate image from the reference image, or vice versa, to obtain a residual. Next, it is determined whether the residual is within the margin of error to identify the candidate craft. Such systems and techniques of the present teachings are used to evaluate alignment and torque of components and/or subcomponents of a candidate craft.
[0052] Figure 8 shows a craft inspection facility 800, which is substantially similar to craft inspection facility 700 of Figure 7, i.e., a robotic envelope 802, imaging sensor devices 804A-804D, and a point of reference 810 are substantially similar to their counterparts in Figure 7, i.e., robotic envelope 702, imaging sensor devices 704A-704I, and point of reference 710, except that craft inspection facility 800 shows a craft 808 inside robotic envelope 802, an imaging sensor device 804D having a field of view 812 that is imaging a side portion of craft 808 and markers 854A-854C on a sidewall of robotic envelope 802. Furthermore, some of imaging sensor devices 804A and 804C are installed on a cable (also sometimes referred to as a "guy line") and other imaging sensor devices 804B and 804D are installed on another cable. Regardless of their orientation, imaging sensor devices 804A-804D are capable of movement along the associated cables, i.e., those cables upon which they rest and may be held during an operative state of the imaging sensor devices. To this end, markers 854A-854C serve in ascertaining the location of one or more of imaging sensor devices 804A-804D, relative to point of reference 810, that may have been displaced from one position to another along the cable. A displaced imaging sensor device may send an input signal to one or more of markers 854A-854C, which are held at a predetermined location inside or outside robotic envelope 802. The response signal(s), received from one or more of markers 854A-854C, provide information on a new location of the displaced imaging sensor device relative to point of reference 810. This method of determining the location of a displaced imaging sensor device may be thought of as "triangulation" of the imaging sensor device.
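One conceivable way to turn the marker responses just described into a position estimate is a least-squares trilateration from the displaced sensor's distances to markers at known coordinates. This is offered only as a hedged illustration of the "triangulation" idea; the application does not specify this computation, and the names below (trilaterate, marker_positions, distances) are hypothetical.
```python
# Hedged trilateration sketch: estimate a displaced imaging sensor's position,
# relative to the (0,0,0) reference point, from ranges to markers at known
# positions (e.g., markers 854A-854C plus any additional markers).
import numpy as np

def trilaterate(marker_positions, distances):
    """marker_positions: (M, 3) marker coordinates relative to the reference point.
    distances: (M,) measured ranges from the displaced sensor to each marker.
    A unique 3D solution generally needs M >= 4 non-coplanar markers; with only
    three markers an additional constraint (e.g., a known height) is required."""
    p = np.asarray(marker_positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the first sphere equation from the others to obtain a linear system.
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position
```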
[0053] Regardless of whether an imaging sensor device is disposed on a floor and/or a sidewall of a robotic envelope or disposed on a cable, the present arrangements offer designs that allow for ascertaining the location of a displaced imaging sensor device inside a robotic envelope or outside the robotic envelope. One reason why it is important to displace an imaging sensor device is because a field of view of a particular imaging sensor device may be blocked or may capture a shadowed area and, therefore, provides no information about the subject object, such as a craft.
[0054] Figure 9 shows another movement configuration of an imaging sensor device.
According to this figure, a craft inspection facility 900, in accordance with one embodiment of the present arrangements, includes a robotic envelope 902 for inquiring information about a craft 908 relative to a point of reference 910. In this embodiment, an imaging sensor device is initially in a first position 904 where its field of view 912 is blocked or captures a shadowed area and, therefore, provides no information about a craft 908. To move the imaging sensor device from this undesired location to a desirable location, rails might be used to first move it down a certain vertical distance (where the imaging sensor's location is denoted by reference numeral 904') along a Z-axis and then a certain horizontal distance along an X-axis where the imaging sensor device 904" has a field of view 912" that is not blocked or does not capture a shadowed area. As a result of the displacement, field of view 912" at a new location, allows the imaging sensor device to provide information about the craft. Although markers (e.g. , markers 854A-854C of Figure 8) are not shown in this embodiment to simplify illustration, they may very well be used to ascertain the position of the displaced imaging sensor device so that more accurate information of the craft relative to the point of reference may be obtained.
[0055] In the embodiment shown in Figure 9, other than the movement of an imaging sensor device and the changing field of views, craft inspection facility 900 is substantially similar to craft inspection facility 200 of Figure 2. It is apparent from the description of the present arrangements, that it is not necessary to have multiple imaging sensor devices, and that a single imaging sensor device, deployed properly within the robotic envelope, may effectively provide information regarding a subject craft.
[0056] In connection with this teaching, Figure 10 shows a craft inspection facility 1000, according to one embodiment of the present arrangements, including a robotic envelope 1002, which in turn is designed to collect information about a craft 1008 using a single imaging sensor device 1004. In this embodiment, single imaging sensor device 1004 may be disposed on any structure 1005 that allows its movement, e.g., a cable or track, and has approximately a 360° field of view. Based on this and other embodiments described herein, it is clear that the present object processing facilities (e.g., craft inspection facilities) provide imaging sensors that have up to a 360° field of view.
[0057] Although craft inspection facility 1000, robotic envelope 1002, imaging sensor device 1004, field of view 1012 and point of reference 1010 are substantially similar to their counterparts in other figures presented herein, it is noteworthy that these arrangements discussed inside a craft inspection facility may very well be carried out using one or more sensor support structures, e.g., sensor support structures 322A-322E, on a craft inspection area, e.g., craft inspection area 300, that does not include sidewalls. In other words, embodiments described herein that appear to require one or more robotic envelopes may well be implemented inside a craft inspection area (e.g., craft inspection area 300 of Figure 3), which does not have sidewalls.
[0058] It is also noteworthy that the imaging sensor device, as contemplated in Figures 1-3 and 7-10, may include certain relevant components shown in Figures 4, 5 and 6 that facilitate in providing information, e.g. , at least one of inspection (defect information), location,
measurement and identification of an object, such as a craft or an external object inside the craft inspection facility, that may be the subject of an inquiry. The configurations and structures shown in Figures 1-10 are not necessary, but in a preferred embodiment of the present teachings may be used to carry out methods, among others, for processing an object.
[0059] The present teachings provide, among other methods, methods for locating an object (e.g., a craft) in space. According to certain embodiments of the present teachings, this method is not necessarily carried out using the structures and/or configurations shown in Figures 1-10.
Method of locating preferably begins with a step of receiving at least a portion of a craft inside a robotic envelope (e.g. , craft in Figures 1-3 and 8-10) for processing. Next, a step of determining is carried out. In this step, a position of an imaging sensor (e.g. , imaging sensors in Figures 1-3 and 8-10) is determined. The imaging sensor may be coupled to or disposed on at least one member chosen from a group comprising one or more sidewalls, floor and roof. During a craft processing operation that is described below, one or more of the sidewalls are adjacent to the craft. Further, the floor is disposed below one or more of the sidewalls, and the roof is disposed above one or more of the sidewalls. In this configuration, at least one of one or more the sidewalls, the floor or the roof define a three-dimensional robotic envelope (e.g. , robotic envelope in Figures 1 -3 and 8- 10) that is sufficiently spacious to receive at least a portion of the craft. In the determining step, the position of the imaging sensor is determined relative to a reference point serving as (0,0,0) coordinates of the three-dimensional robotic envelope.
[0060] Once the position of the imaging sensor relative to the reference point is determined, an imaging step may commence. In the imaging step, at least a portion of the craft is imaged using the imaging sensors to develop a two-dimensional and/or a three-dimensional image of at least a portion of the craft. It is important to keep in mind, however, that during the imaging step when the imaging sensor is at a particular position, the imaging sensor is preferably held stationary along X, Y and Z-axes at that position. In one embodiment of the present teachings, however, while the imaging sensor is held stationary along X, Y and Z-axes at a particular position, it is free to rotate in that position.
[0061] Finally, a locating step is initiated. Specifically, using the two-dimensional image and/or the three-dimensional image obtained from the previous imaging step, the craft is located in space relative to the reference point. In other words, the images obtained from the imaging step provide the coordinates of the detected object. To this end, the disclosure pertaining to Figures 4, 5 and 6 describes one exemplar manner by which the coordinates of the detected object are obtained.
[0062] In those instances where an imaging sensor needs to move from one position to another (e.g. , imaging sensor move on guy lines in Figure 8 and/or move on a rail or a track in Figure 9), the present teachings provide other embodiments of methods of locating. In one such exemplar embodiment, after the above-mentioned steps of receiving, determining, imaging and locating have concluded, the location method proceeds to a step of moving the imaging sensor to another or new position. Next, a determining step includes determining the new location or position of the imaging sensor (at another position) relative to the reference point of the three- dimensional robotic envelope.
[0063] Then another imaging step is carried out. This imaging step includes imaging at least the portion (that was imaged at the previous position) and/or another portion of the craft using the imaging sensor. As a result, another two-dimensional image and/or another three-dimensional image of at least the portion and/or another portion of the craft is/are obtained. As it was during the previous imaging step, the imaging sensor, in this imaging step also, is held stationary along X, Y and Z-axes at the imaging sensor's new location or position.
[0064] If desired, the present teachings provide for processing the images obtained from the first imaging step (when the imaging sensor was at the first position) and images obtained from the second imaging step (when the imaging sensor was the second or new position) to produce a resulting image. Specifically, the method of location may include stitching the first two- dimensional image and the second two-dimensional image and/or stitching the first three- dimensional image and the second three-dimensional image to form a resulting two-dimensional image and/or a resulting three-dimensional image, respectively. In this embodiment, the present location methods preferably further include confirming location in space of the craft and/or locating in space another portion of the craft (which was imaged during the second imaging step) relative to the reference point by using the resulting two-dimensional image and/or the resulting three-dimensional image.
[0065] The present teachings also provide for locating one or more components and/or one or more subcomponents of an object, e.g. , a craft, in space relative to the reference point. To this end, certain preferred embodiments of the present location methods further include imaging, using the above-mentioned imaging sensor, at least a portion of a component and/or a subcomponent of the craft to develop a two-dimensional image and/or a three-dimensional image of at least the portion of the component and/or the subcomponent. Again, during this imaging step, the imaging sensor is preferably held stationary along X, Y and Z-axes. By using the two- dimensional image and/or the three-dimensional image of at least the portion of the component and/or the subcomponent, the next step of locating involves locating in space the component and/or the subcomponent of the craft relative to the reference point.
[0066] Once the component and/or the subcomponent of the craft is/are located in space, the present teachings provide for carrying out many different types of processes. To effect processing of the component and/or the subcomponent, the method of location or another process method may be implemented. Regardless of which method is applied, the present teachings provide that after the component and/or the subcomponent of the craft is/are located in space, a step of mobilizing is carried out. This step includes mobilizing one or more robots, each having one or more of the sensors, to safety zones within the robotic envelope. Next, a positioning step includes positioning one or more of the robots at approach points or at a standoff distance (that is associated with a robot dedicated to a particular inspection method) to commence processing of the component and/or the subcomponent. Representative processing techniques include inspecting, manufacturing, repairing and overhauling.
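As a hedged sketch of the positioning step, one simple way to compute a standoff pose is to back the approach point off from the located surface point along its outward normal. The surface-normal formulation and all names below are illustrative assumptions rather than a description of the robots' actual path planning.
```python
# Hedged sketch: place a robot's end effector at a standoff distance from a
# located point on a component, along the outward surface normal. Names are
# hypothetical and not taken from the application.
import numpy as np

def approach_point(surface_point, outward_normal, standoff_distance):
    """surface_point: (3,) point on the component, in facility coordinates.
    outward_normal: (3,) normal pointing away from the surface (any length).
    Returns the (3,) approach point at the requested standoff distance."""
    n = np.asarray(outward_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.asarray(surface_point, dtype=float) + standoff_distance * n
```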
[0067] Instead of moving a single sensor from one position to another, certain preferred embodiments of the present teachings rely on images obtained from two or more imaging sensors. In these embodiments, after the above-mentioned steps of receiving, determining, imaging and locating described in connection with the first imaging sensor at its first position have concluded, then the location method may proceed to a determining step. This step involves determining relative to the same reference point as the first imaging sensor, another or a new position of another or new imaging sensor, which is coupled to or disposed on at least one of— one or more sidewalls, floor or roof. In this determining step, the imaging sensor captures a new field of view that is different from the field of view captured by the first imaging sensor.
However, it is possible that the two different fields of view include one or more overlapping portions.
[0068] After this determining step has concluded, an imaging step may commence. This imaging step includes imaging at least the portion and/or another portion of the craft using the new or second imaging sensor to develop another two-dimensional image and/or another three-dimensional image of at least the portion and/or another portion of the craft. Again, during the imaging step, the second imaging sensor is held stationary along X, Y and Z-axes (at the position of the second imaging sensor). In this embodiment, the first two-dimensional image and/or the first three-dimensional image obtained from the first imaging sensor and the second two-dimensional image and/or the second three-dimensional image obtained from the second imaging sensor have two-dimensional overlapping portions and/or three-dimensional overlapping portions.
[0069] Next, in a stitching step, the two-dimensional overlapping portions and/or the three-dimensional overlapping portions are stitched to form a resulting two-dimensional image and/or a resulting three-dimensional image, respectively. Using this resulting two-dimensional image and/or the resulting three-dimensional image, the present teachings allow locating in space the craft relative to the reference point. Using a combination of the steps described above, the present teachings recognize that, using two or more imaging sensors, a component and/or a subcomponent of the craft is also located relative to a reference point in space.

[0070] The present methods are not limited to crafts and extend to objects generally. In certain embodiments, the present teachings provide methods of processing an object. One such exemplar method begins with a step of receiving at least a portion of an object inside a robotic envelope. Next, a step of determining includes determining a position of an imaging sensor relative to a reference point serving as (0,0,0) coordinates of the three-dimensional robotic envelope. The imaging sensor may be coupled to or disposed on at least one of one or more sidewalls, the floor or the roof. Then, an imaging step involves imaging at least a portion of the object using the imaging sensor to develop a two-dimensional image and/or a three-dimensional image of at least a portion of the object. As mentioned above, during the imaging step, the imaging sensor is held stationary along X, Y and Z-axes at the position (of the imaging sensor). Using the two-dimensional image and/or the three-dimensional image obtained from the imaging step, certain embodiments of the present methods proceed to locating in space the object relative to the reference point.
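Reading the two preceding paragraphs together, one plausible three-dimensional realization is to express each stationary sensor's partial scan in the facility frame (using each sensor's known position relative to the reference point) before combining them; the sketch below assumes simple rigid transforms and omits overlap de-duplication and registration refinement.

    import numpy as np

    def merge_point_clouds(cloud_a, pose_a, cloud_b, pose_b):
        # Each pose is an (R, t) pair giving a sensor's fixed orientation and position
        # relative to the (0,0,0) reference point; each cloud is an (N, 3) array in
        # that sensor's own coordinates.
        R_a, t_a = pose_a
        R_b, t_b = pose_b
        a_world = cloud_a @ R_a.T + t_a
        b_world = cloud_b @ R_b.T + t_b
        return np.vstack([a_world, b_world])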
[0071] Once the object is located in space, the present teachings recognize that the next steps may focus on one or more components and/or one or more subcomponents of the object. As a result, an imaging step includes imaging at least a portion of one or more components and/or one or more subcomponents of the located object using the imaging sensors to develop a two-dimensional image and/or a three-dimensional image of at least the portion of one or more of the components and/or one or more of the subcomponents of the object. During this imaging step, the imaging sensors are held stationary along X, Y and Z-axes at the position (of the imaging sensor). Using the two-dimensional image and/or the three-dimensional image of one or more of the components and/or one or more of the subcomponents of the object, certain embodiments of the present teachings proceed to locating in space one or more of the components and/or one or more of the subcomponents of the object relative to the reference point. At this stage, one or more of the components and/or one or more of the subcomponents of the object are ready to undergo processing. Representative processing techniques include any one of removing, installing, cleaning, surface preparing, coating, polishing or painting.
[0072] In certain embodiments of the present teachings, it is not necessary to rely on a reference point to effect processing. To this end, the present teachings provide methods of identification and/or verification. One such exemplar method begins with a step of obtaining a reference craft image. Next, a receiving step includes receiving a candidate craft within a field of view of one or more sensors. Then, an imaging step includes imaging the candidate craft to produce a candidate craft image. At this stage, one exemplar identification and/or verification method proceeds to a comparing step that includes comparing the reference craft image with the candidate craft image to obtain a residual between the reference craft image and the candidate craft image. Then, a determining step inquires whether the residual is within a margin of error, thereby identifying and/or verifying the candidate craft. The reference craft image and the candidate craft image are each preferably a two-dimensional image and/or a three-dimensional image. In one embodiment of the present teachings, the margin of error represents a value that is less than about 1% deviation (in linear measurement units) from the location of identifying or verifying features present in the reference craft image. In other embodiments, this margin of error is less than about 2%, about 3%, about 4% or about 5% deviation (in linear measurement units) from the location of identifying or verifying features present in the reference craft image.
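The residual and margin-of-error test described above is left open by the present teachings; purely as an illustration, the sketch below treats the residual as a difference of the overlaid images and checks that matching identifying features have shifted by less than a chosen fraction of the image diagonal (1% here, per the first embodiment). The feature extraction itself is assumed to happen upstream, and the arrays and threshold are hypothetical.

    import numpy as np

    def verify_candidate(reference_img, candidate_img, feats_ref, feats_cand, max_dev=0.01):
        # Residual: per-pixel difference of the overlaid, same-size images.
        residual = np.abs(reference_img.astype(float) - candidate_img.astype(float))
        # Margin-of-error test: displacement of identifying features, normalized by
        # the image diagonal, must stay below max_dev.
        diag = np.hypot(*reference_img.shape[:2])
        shifts = np.linalg.norm(np.asarray(feats_ref, float) - np.asarray(feats_cand, float), axis=1)
        return residual, bool(np.all(shifts / diag < max_dev))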
[0073] In one embodiment of the present teachings, the reference craft image is stored on a database. In this embodiment, the step of obtaining includes retrieving the reference craft image from the database.
[0074] In an alternative embodiment of the present teachings, the step of obtaining the reference craft image begins with a step of receiving a reference craft within one or more of the fields of view of one or more of the sensors. Next, a step of imaging involves imaging the reference craft to produce the reference craft image. As explained above, stitching different reference craft images obtained from different fields of view may be required to produce the ultimate reference craft image that is used in subsequent steps. Regardless, in one implementation of this embodiment, the reference craft is the candidate craft and the reference craft image includes a prior candidate craft image obtained from the candidate craft prior to the above-mentioned step of receiving the candidate craft. Stated another way, the prior candidate craft image may be obtained during the candidate craft's prior visit. In this implementation, as a result, a separate reference craft or its image is not needed and the candidate craft itself is used to generate the reference image.
[0075] The methods of identification and/or verification may further include identifying the presence or modification of one or more components and/or one or more sub-components on the candidate craft or its image, when those components and/or sub-components were not present, or were not modified, on the reference craft image. Alternatively, the identification and/or verification methods may further include identifying the absence of one or more components and/or one or more sub-components on the candidate craft or its image, when those components and/or sub-components were present on the reference craft image.
[0076] There are numerous applications of the present identification and/or verification methods. In one preferred embodiment, the present identification and/or verification methods may further include identifying the model of one or more components and/or one or more subcomponents on the candidate craft. By way of example, an image captured from one or more imaging sensors that are positioned proximate to a location that displays the craft's identification information, e.g., tail number, model number and/or serial information, effectively provides the identification information for the craft.
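As a hedged illustration of reading such identification markings, the sketch below runs optical character recognition on a cropped image of the marking area; it assumes the third-party pytesseract wrapper (and an installed Tesseract engine), and the file name is hypothetical.

    import cv2
    import pytesseract  # assumption: Tesseract OCR engine and its Python wrapper are available

    def read_tail_number(image_path):
        # Binarize the crop so painted characters stand out from the fuselage, then OCR it.
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return pytesseract.image_to_string(binary, config="--psm 7").strip()

    # Hypothetical usage: print(read_tail_number("tail_section.png"))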
[0077] In another preferred embodiment of the identification and/or verification methods, the step of determining includes identifying and/or verifying torque and/or alignment of one or more components and/or one or more subcomponents in the candidate craft relative to torque and/or alignment of one or more of the components and/or one or more of the subcomponents in the reference craft. By way of example, during an inspection process of an aircraft, the determining step establishes whether a horizontal stabilizer component of the aircraft is out of alignment. As another example, the determining step establishes whether a fuselage component of the aircraft is torqued relative to the aircraft's prior inspection. As yet another example, the determining step establishes whether another subcomponent, e.g., a winglet, has been added to a component, e.g., a wing, since the aircraft's prior visit. Establishing the presence and/or absence of these conditions of the craft, and/or additions to or subtractions from the craft, prevents robots, if used, from colliding with the craft and also allows for effective robotic pathways that increase throughput.
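One simple way such an alignment check could be quantified, assuming the component's principal axis has already been extracted from the reference and candidate three-dimensional images in facility coordinates, is sketched below; the axis values are hypothetical.

    import numpy as np

    def misalignment_deg(axis_reference, axis_candidate):
        # Angle between the component's axis in the reference scan and in the candidate scan.
        a = np.asarray(axis_reference, float)
        b = np.asarray(axis_candidate, float)
        cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

    # Hypothetical: a horizontal stabilizer axis rotated slightly since the prior visit.
    print(misalignment_deg([1.0, 0.0, 0.0], [0.995, 0.0, 0.1]))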
[0078] In the above-mentioned comparing step, two images, the reference craft image and the candidate craft image, are electronically overlaid. Before the two images are overlaid, however, this step may use certain recognition algorithms that identify surface features by extracting landmarks, or features, from an image of the candidate craft's and/or reference craft's surface. For example, an algorithm may analyze the relative position, size and/or shape of a component and/or subcomponent on the candidate craft; these features may then be used to search for other images of a reference craft with matching features. Other algorithms normalize a gallery of surface images and then compress the surface recognition data, only saving the data in the image that is useful for candidate craft recognition. A candidate craft's image data is then compared with a reference craft's image data. One of the recognition methods is based on template matching techniques applied to a set of salient craft surface or perimeter features, providing a sort of compressed craft representation.
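For the template matching variant mentioned above, a minimal sketch using OpenCV is shown below; the choice of normalized cross-correlation and the idea of cropping a window or access panel from the reference image as the template are assumptions made for the example.

    import cv2

    def locate_salient_feature(candidate_img, template):
        # Slide the reference-derived template over the candidate craft image and return
        # the best-match location together with a normalized correlation score.
        result = cv2.matchTemplate(candidate_img, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        return max_loc, float(max_val)

    # Hypothetical usage with pre-loaded grayscale arrays:
    # loc, score = locate_salient_feature(candidate_gray, window_template)
    # A score near 1.0 suggests the same surface feature appears at 'loc' on the candidate craft.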
[0079] Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, and photometric, which is a statistical approach that distills an image into values and compares the values with templates to eliminate variances. Representative recognition algorithms include Principal Component Analysis using eigenfaces, Linear Discriminant Analysis, Elastic Bunch Graph Matching using the Fisherface algorithm, the Hidden Markov model, Multilinear Subspace Learning using tensor representation, and neuronally motivated dynamic link matching.
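The algorithms listed above come from the face recognition literature; as a generic, hedged sketch of the first of them, the code below performs a Principal Component Analysis over a gallery of flattened reference surface images and matches a candidate image to its nearest neighbour in the learned subspace. The gallery layout and number of components are assumptions.

    import numpy as np

    def pca_recognize(gallery, probe, n_components=10):
        # gallery: (n_images, n_pixels) array of flattened reference surface images.
        gallery = np.asarray(gallery, dtype=float)
        mean = gallery.mean(axis=0)
        centered = gallery - mean
        # Principal components via SVD; rows of Vt span the recognition subspace.
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        basis = Vt[:n_components]
        gallery_coords = centered @ basis.T
        probe_coords = (np.asarray(probe, float) - mean) @ basis.T
        distances = np.linalg.norm(gallery_coords - probe_coords, axis=1)
        return int(np.argmin(distances))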
[0080] The present teachings recognize that improved accuracies during the comparison step may be achieved by using the three-dimensional surface recognition technique. This technique uses 3D sensors to capture information about the shape of a craft, and/or component and/or subcomponent of the craft. This information is then used to identify distinctive features on the surface of the craft and/or its component and/or subcomponent, such as the contour of the cockpit windows, nose, and crown above the cabin area.
[0081] One advantage of the three-dimensional surface recognition technique is that it is not affected by changes in lighting like other techniques. It can also identify a craft from a range of external viewing angles, including a profile view. Three-dimensional data points from a craft vastly improve the precision of craft surface recognition. The three-dimensional recognition technique is enhanced by the development of sophisticated imaging sensors that do a better job of capturing three-dimensional craft surface imagery. The sensors work by projecting structured light (e.g., from a laser source 430A, 530A or 630 of Figures 4, 5 and 6, respectively) onto the craft surface. Up to a dozen or more of these image sensors may be placed on the same CMOS chip, such that each image sensor captures a different part of the spectrum.
[0082] Relying on such image processing techniques, the present teachings provide measurement methods. One such exemplar method begins with receiving a candidate craft within one or more fields of view of one or more imaging sensors. Then, a step of imaging involves imaging the candidate craft to produce a candidate craft image. Next, an obtaining step includes obtaining a reference craft image and one or more reference geometric measurements associated with the reference craft image. Next, a comparing step requires that the reference craft image be compared with the candidate craft image to obtain a residual between the reference craft image and the candidate craft image. The exemplar method then proceeds to a measuring step, in which one or more geometrical features of the residual are measured to arrive at one or more residual geometric measurements associated with the residual. Finally, an adding or subtracting step includes adding or subtracting the reference geometric measurements to or from the residual geometric measurements to arrive at candidate geometric measurements. By way of example, a geometric measurement is one property chosen from a group comprising surface area, volume and linear measurement. As a result, the measurement methods of the present teachings use a two-dimensional image and/or a three-dimensional image to provide a wide variety of measurement data. In the case of a component and/or subcomponent, the present measurement methods provide the dimensions of that component and/or subcomponent. As another example, in the case of a defect on a craft, the present measurement methods provide the length, area and volume of the detected defect. As yet another example, in the case of paint and coatings on an object, the present measurement methods provide thickness and density values.
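As one hedged illustration of the measuring and adding or subtracting steps, the sketch below converts a residual mask (the region where the candidate image departs from the reference image) into a surface-area contribution and adds it to the known reference measurement; the pixel scale and all numbers are hypothetical, and the contribution would instead be subtracted where material has been removed.

    import numpy as np

    def candidate_surface_area(reference_area_m2, residual_mask, m2_per_pixel):
        # Convert the residual region into square metres and combine it with the
        # reference geometric measurement to arrive at the candidate measurement.
        residual_area_m2 = float(np.count_nonzero(residual_mask)) * m2_per_pixel
        return reference_area_m2 + residual_area_m2, residual_area_m2

    # Hypothetical: a newly added winglet shows up as a residual region of the scan.
    mask = np.zeros((1080, 1920), dtype=bool)
    mask[200:500, 300:700] = True
    print(candidate_surface_area(410.0, mask, m2_per_pixel=2.5e-5))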
[0083] Although not necessary, the image of the component and/or subcomponent may be of higher resolution than the image of the object, e.g., craft. Moreover, the above-described methods may include a step of storing that stores the images obtained from an imaging step, stores resulting images after stitching and/or stores the residual obtained after a comparing step. Further, the identification and/or verification methods and the measurement methods may use images obtained from imaging sensors with different fields of view. In these methods, like the locating methods, the images having overlapping portions are stitched to form a single image that is subsequently used for identification, verification and/or measurement.
[0084] Further, the different steps of determining, imaging, locating, moving, mobilizing, positioning, stitching, processing, obtaining, receiving, comparing, measuring and adding and/or subtracting are preferably carried out using a processing system. The processing system may include a server and user devices, such as computers and mobile devices, that are programmed to allow users to communicate over a network. In general, the server may include one or more computers, memory, additional data storage devices (e.g., a database), a network interface and a processor. The server may produce programming instructions, files and/or data that carry out or facilitate implementation of the above-mentioned steps and that may be transmitted over the network to the user devices. Further, the processor typically executes a program stored in memory to accept input and/or provide output through the network interface from/to the user devices.
[0085] Each user device includes its own network interface, memory, processor, screen and input component. The mobile device may be a wireless device and the network may include wireless communication to the device. The wireless network may include, but is not limited to, a cellular telephone network, a WiFi network, a WiMax network, a Bluetooth network, a public switched telephone network (PSTN), the Internet and satellite links. A user may use the user device to enter or run a protocol, enter data, and/or analyze data stored on a server.
[0086] Although illustrative embodiments of this invention have been shown and described, other modifications, changes, and substitutions are intended. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the scope of the disclosure, as set forth in the following claims.

Claims

What is claimed is:
1. A craft processing facility comprising:
at least one member chosen from a group comprising one or more sidewalls, floor and roof, such that one or more of said sidewalls are adjacent to a craft during a processing operation, said floor is disposed below one or more of said sidewalls and said roof is disposed above one or more of said sidewalls, and wherein at least one of one or more said sidewalls, said floor or said roof define a three-dimensional robotic envelope that is sufficiently spacious to receive at least a portion of said craft;
one or more imaging sensors coupled to or disposed on any one member chosen from said group comprising one or more of sidewalls, floor and roof, and one or more of said imaging sensors operative to obtain a two-dimensional and/or a three-dimensional image of at least said portion of said craft; and
a reference point serving as (0,0,0) coordinates of said three-dimensional robotic envelope.
2. The craft processing facility of claim 1, wherein said reference point is located at any one location chosen from a group comprising inside said three-dimensional robotic envelope, outside three-dimensional robotic envelope and on said craft.
3. The craft processing facility of claim 1, further comprising one or more sensor support structures, each of which supports one or more of said imaging sensors and is disposed on said floor and/or said roof of said craft processing facility.
4. The craft processing facility of claim 1, wherein one or more of said imaging sensors include at least one member chosen from a group comprising laser, beam expander, receiving telescope, array of receivers and data processor.
5. The craft processing facility of claim 1, wherein one or more of said imaging sensors have a field of view that captures a portion of said craft disposed outside said robotic envelope.
6. The craft processing facility of claim 1, further comprising a guy line, and wherein one or more of said imaging sensors are movably disposed on said guy line.
7. The craft processing facility of claim 6, further comprising markers that ascertain the position of one or more of said imaging sensors on said guy line.
8. The craft processing facility of claim 1, further comprising one or more rails, and wherein one or more sensors are movably disposed on each of said rails such that one or more of said sensors are capable of movement to locations on said rail that provide a field of view that does not capture blocked or shadowed areas of said craft.
9. The craft processing facility of claim 1, wherein one or more of said imaging sensors include a single imaging sensor that has up to a 360° field of view inside said robotic envelope.
10. The craft processing facility of claim 1, wherein said craft is a vehicle.
11. The craft processing facility of claim 1, further comprising robots that carry out one processing activity chosen from a group comprising inspection, manufacture, maintenance, repair and overhaul.
12. A location method comprising:
receiving at least a portion of a craft inside a robotic envelope;
determining a position of an imaging sensor that is coupled to or disposed on at least one member chosen from a group comprising one or more sidewalls, floor and roof, such that one or more of said sidewalls are adjacent to said craft during a processing operation, said floor is disposed below one or more of said sidewalls and said roof is disposed above one or more of said sidewalls, and wherein at least one of one or more said sidewalls, said floor or said roof define a three-dimensional robotic envelope that is sufficiently spacious to receive at least a portion of said craft, and wherein said position is determined relative to a reference point serving as (0,0,0) coordinates of said three-dimensional robotic envelope;
imaging at least a portion of said craft using said imaging sensor to develop a two-dimensional and/or a three-dimensional image of at least a portion of said craft, and wherein during said imaging, said imaging sensor is held stationary along X, Y and Z-axes at said position; and
locating in space said craft relative to said reference point by using said two-dimensional image and/or said three-dimensional image.
13. The location method of claim 12, further comprising:
moving said imaging sensor to another position;
determining location of said another position of said imaging sensor relative to said reference point of said three-dimensional robotic envelope; and
imaging at least said portion and/or another portion of said craft using said imaging sensor to develop another two-dimensional and/or another three-dimensional image of at least said portion and/or said another portion of said craft, and wherein during said imaging, said imaging sensor is held stationary along X, Y and Z-axes at said another position.
14. The location method of claim 13, further comprising stitching said two-dimensional image and said another two-dimensional image and/or said three-dimensional image and said another three-dimensional image to form a resulting two-dimensional image and/or a resulting three-dimensional image, respectively.
15. The location method of claim 14, further comprising confirming location in space of said craft and/or locating in space said another portion of said craft relative to said reference point by using said resulting two-dimensional image and/or said resulting three-dimensional image.
16. The location method of claim 12, wherein during said imaging, said imaging sensor rotates at said position.
17. The location method of claim 12, further comprising:
imaging, using said imaging sensor, at least a portion of a component and/or a subcomponent of said craft to develop a two-dimensional image and/or a three-dimensional image of at least said portion of said component and/or said subcomponent, and wherein during said imaging, said imaging sensor is held stationary along X, Y and Z-axes at said position; and locating in space said component and/or said subcomponent of said craft relative to said reference point by using said two-dimensional image and/or said three-dimensional image of at least said portion of said component and/or said subcomponent.
18. The location method of claim 17, further comprising:
mobilizing one or more robots, each having one or more of said sensors, to safety zones within said robotic envelope; and
positioning one or more of said robots at approach points and/or at a standoff distance to commence processing of said component and/or said subcomponent.
19. The location method of claim 18, wherein said processing includes at least one technique chosen from a group comprising inspecting, manufacturing, maintaining, repairing and overhauling.
20. The location method of claim 12, further comprising:
determining another position of another imaging sensor that is coupled to or disposed on at least one member chosen from said group comprising one or more sidewalls, floor and roof, and wherein said another position is determined relative to said reference point, and wherein said imaging sensor captures a field of view and said another imaging sensor captures another field of view that is not the same as said field of view, but said field of view and said another field of view include one or more overlapping portions;
imaging at least said portion and/or another portion of said craft using said another imaging sensor to develop another two-dimensional image and/or another three-dimensional image of at least said portion and/or another portion of said craft, and wherein during said imaging, said another imaging sensor is held stationary along X, Y and Z-axes at said another position, and wherein said two-dimensional image and said another two-dimensional image include two-dimensional overlapping portions and/or said three-dimensional image and said another three-dimensional image include three-dimensional overlapping portions; stitching said two-dimensional overlapping portions and/or said three-dimensional overlapping portions to form a resulting two-dimensional image and/or a resulting three-dimensional image, respectively; and
locating in space said craft relative to said reference point by using said resulting two- dimensional image and/or said resulting three-dimensional image.
21. A method of processing an object, said method comprising:
receiving at least a portion of a craft inside a robotic envelope for processing;
determining a position of an imaging sensor that is coupled to or disposed on at least one member chosen from a group comprising one or more sidewalls, floor and roof, such that one or more of said sidewalls are designed to be adjacent to said craft during said processing, said floor is disposed below one or more of said sidewalls and said roof is disposed above one or more of said sidewalls, and wherein at least one of one or more said sidewalls, said floor or said roof define a three-dimensional robotic envelope that is sufficiently spacious to receive at least a portion of said craft, and wherein said position is determined relative to a reference point serving as (0,0,0) coordinates of said three-dimensional robotic envelope;
imaging at least a portion of said object using said imaging sensor to develop a two-dimensional image and/or a three-dimensional image of at least a portion of said object, and wherein during said imaging, said imaging sensor is held stationary along X, Y and Z-axes at said position;
locating in space said object relative to said reference point by using said two- dimensional image and/or said three-dimensional image;
imaging at least a portion of one or more components and/or one or more subcomponents of said object using said imaging sensor to develop a two-dimensional image and/or a three-dimensional image of at least a portion of one or more of said components and/or one or more of said subcomponents of the object, and wherein during said imaging of at least said portion of one or more of said components and/or one or more of said subcomponents, said imaging sensor is held stationary along X, Y and Z-axes at said position;
locating in space one or more of said components and/or one or more of said
subcomponents of said object relative to said reference point by using a two-dimensional image and/or a three-dimensional image of one or more of said components and/or one or more of said subcomponents of said object;
processing one or more of said components and/or one or more of said subcomponents of said object, wherein said processing includes at least one technique chosen from a group comprising removing, installing, cleaning, surface preparing, coating, polishing and painting.
22. An identification and/or verification method comprising:
obtaining a reference craft image;
receiving a candidate craft within a field of view of one or more sensors;
imaging said candidate craft to produce a candidate craft image;
comparing said reference craft image with said candidate craft image to obtain a residual between said reference craft image and said candidate craft image; and
determining whether said residual is within a margin of error and identifying and/or verifying said candidate craft.
23. The identification and/or verification method of claim 22, wherein said reference craft image and said candidate craft image are each a two-dimensional image and/or a three-dimensional image.
24. The identification and/or verification method of claim 22, wherein said margin of error represents a value that is less than about 1% deviation (in linear measurement units) from location of identifying or verifying features present in reference craft image.
25. The identification and/or verification method of claim 22, wherein said obtaining said reference craft image includes retrieving said reference craft image from a database.
26. The identification and/or verification method of claim 22, wherein said obtaining said reference craft image includes:
receiving a reference craft within one or more of said field of views of one or more of said sensors; and
imaging said reference craft to produce said reference craft image.
27. The identification and/or verification method of claim 26, wherein said reference craft is said candidate craft and said reference craft image includes a prior candidate craft image obtained, from said candidate craft, prior to said receiving said candidate craft.
28. The identification and/or verification method of claim 22, further comprising identifying presence or modification of one or more components and/or one or more sub-components on said candidate craft, and wherein one or more of said components and/or one or more of said subcomponents not being present or modified on said reference craft image.
29. The identification and/or verification method of claim 22, further comprising determining absence of one or more components and/or one or more sub-components on said candidate craft, and wherein one or more of said components and/or one or more of said subcomponents being present on said reference craft image.
30. The identification and/or verification method of claim 22, further comprising identifying model of one or more components and/or one or more sub-components on said candidate craft.
31. The identification and/or verification method of claim 22, wherein said determining includes identifying and/or verifying torque and/or alignment of one or more components and/or one or more subcomponents in said candidate craft relative to torque and/or alignment of one or more of said components and/or one or more of said subcomponents in said reference craft.
32. The identification and/or verification method of claim 22, wherein said comparing said reference craft image with said candidate craft image includes implementing one algorithm chosen from a group comprising geometric recognition algorithm, photometric recognition algorithm and three-dimensional surface recognition algorithm.
33. A measurement method comprising:
receiving a candidate craft within one or more field of views of one or more imaging sensors;
imaging said candidate craft to produce a candidate craft image;
obtaining a reference craft image and one or more reference geometric measurements associated with said reference craft image;
comparing said reference craft image with said candidate craft image and obtaining a residual between said reference craft image and said candidate craft image;
measuring one or more geometrical features of said residual to arrive at one or more residual geometric measurements associated with said residual; and
adding or subtracting reference geometric measurements to said residual geometric measurements and arriving at candidate geometric measurements.
34. The measurement method of claim 33, wherein a geometric measurement is one property chosen from a group comprising surface area, volume and linear measurement.
PCT/US2015/060487 2014-11-12 2015-11-12 Novel systems and methods for processing an object WO2016077653A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462078957P 2014-11-12 2014-11-12
US62/078,957 2014-11-12

Publications (1)

Publication Number Publication Date
WO2016077653A1 true WO2016077653A1 (en) 2016-05-19

Family

ID=55955089

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/060487 WO2016077653A1 (en) 2014-11-12 2015-11-12 Novel systems and methods for processing an object

Country Status (1)

Country Link
WO (1) WO2016077653A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6285320B1 (en) * 1999-09-03 2001-09-04 Sikorsky Aircraft Corporation Apparatus and method for mapping surfaces of an object
WO2002018958A2 (en) * 2000-08-25 2002-03-07 Aerobotics, Inc. Non-destructive inspection, testing and evaluation system for intact aircraft and components and methods therefore
US20070283582A1 (en) * 2004-03-18 2007-12-13 Karin Donner Measuring Method and Measuring Unit for Determining the Spatial Position of a Wheel Rim as Well as a Wheel Alignment Measuring System
WO2010010379A1 (en) * 2008-07-21 2010-01-28 Autotrakker Limited Cargo measurement using a range finder
US20130194417A1 (en) * 2012-01-31 2013-08-01 Dr. Ing. H.C.F. Porsche Aktiengesellschaft Evaluation unit, evaluation method, measurement system for a crash test vehicle measurement and a method for performing a crash test vehicle measurement

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2616008A (en) * 2022-02-23 2023-08-30 Degould Ltd Measuring station

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 15858843
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 15858843
Country of ref document: EP
Kind code of ref document: A1