WO1999060335A1 - Survey apparatus - Google Patents

Survey apparatus

Info

Publication number
WO1999060335A1
WO1999060335A1 (PCT/GB1999/001361)
Authority
WO
WIPO (PCT)
Prior art keywords
camera
survey apparatus
range finder
target
angles
Prior art date
Application number
PCT/GB1999/001361
Other languages
French (fr)
Inventor
Stephen Leslie Ball
Original Assignee
Measurement Devices Limited
Priority date
Filing date
Publication date
Application filed by Measurement Devices Limited filed Critical Measurement Devices Limited
Priority to AU39374/99A priority Critical patent/AU3937499A/en
Priority to EP99922262A priority patent/EP1078221A1/en
Publication of WO1999060335A1 publication Critical patent/WO1999060335A1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C15/00: Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G01C15/002: Active optical surveying means

Definitions

  • the present invention relates to a survey apparatus and method.
  • Conventional survey equipment typically measures the distance, bearing and inclination angle to a target (such as a tree, electricity pylon or the like) or a target area, with reference to the position of a user. Such conventional equipment does not allow the user to produce an image of the target which can be used to measure heights and distances between objects within the target area.
  • a survey apparatus comprising a range finder, a camera, a processor capable of processing image and range signals, wherein the camera facilitates aiming of the range finder.
  • a method of measuring the range to a target comprising the steps of providing a camera to view a target area; providing a range finder; using the camera to produce an image of the target area; selecting the target within the target area; generating horizontal and vertical angles between a reference point and the target; and moving the range finder, if required, through the generated horizontal and vertical angles to measure the range to the target.
  • the camera is preferably a video camera, and more preferably a digital video camera.
  • the camera may comprise a charge-coupled device (CCD) video camera.
  • the camera may comprise a digital image camera.
  • the apparatus typically includes a display device to allow a user to view a target area using the camera.
  • the display device typically comprises a VGA monitor.
  • the display device may comprise a VGA eyepiece monitor, such as a liquid-crystal display (LCD) or flat panel display. This offers the advantage that an image of the target may be viewed by the user to ensure that the correct target has been selected.
  • the survey apparatus may be operated remotely using the camera to view the target area.
  • the processor typically comprises a computer.
  • the range finder is typically a laser range finder.
  • the laser range finder is bore-sighted with the camera. This, in conjunction with the monitor used to identify the target area, offers the advantage that the user can be sure that the target area he has selected will be captured by the camera, and that the target area can be viewed remotely from the apparatus.
  • any subsequent calculations made by the image processor do not require an offset between the camera and the range finder to be considered.
  • the apparatus typically calculates the range to specified points and incorporates such distance measurements into the image displayed on a screen.
  • the apparatus preferably includes a pan and tilt unit for panning and tilting of the range finder and/or camera.
  • the pan and tilt unit typically comprises a first motor for panning of the range finder and/or camera, and a second motor for tilting of the range finder and/or camera.
  • the pan and tilt unit typically includes first and second digital encoders for measuring the angles of pan and tilt.
  • the first and second motors are typically controlled by the processor.
  • the outputs of the first and second encoders are typically fed to the processor. This provides a feedback loop wherein the motors are operated to pan and tilt the range finder and/or camera through the generated horizontal and vertical angles.
  • the encoders may then be used to check the angles to ensure that the range finder and/or camera were panned and tilted through the correct angles.
  • the image is preferably digitised, wherein the image comprises a plurality of pixels.
  • the reference point is typically a pixel within the target area, and may be a centre point of the target area or one of the corners.
  • the target is typically selected by selecting a pixel within the target, using, for example, a mouse pointer. This produces x and y coordinates for the target pixel.
  • the survey apparatus includes a compass and an inclinometer and/or gyroscope.
  • the compass is typically a digital fluxgate compass. These allow the bearing and angle of inclination to the target to be measured.
  • the signals from the compass, inclinometer and/or gyroscope are preferably digitised to provide data to the processor.
  • the bearing and/or angle of inclination to the target can be displayed on the screen.
  • the survey apparatus further includes a position fixing system for identifying the geographical position of the apparatus.
  • the position fixing system is preferably a Global Positioning System (GPS) which typically includes a Differential Global Positioning System (DGPS).
  • the GPS/DGPS typically allows the time of the survey to be recorded.
  • the signal from the GPS is typically digitised to provide data to the processor.
  • the survey apparatus is typically mounted on a mounting device.
  • the mounting device typically comprises a tripod stand.
  • the apparatus can optionally be mounted on an elevating platform, telescopic elevating tube, telescopic arm, robotic arm or the like. This provides the apparatus with a larger viewing area.
  • the elevating platform or the like is typically capable of 360° rotation. This provides a complete viewing range.
  • the apparatus allows data gathering from within a vehicle to construct a digital terrain model of the terrain surrounding the vehicle.
  • the method typically comprises any one, some or all of the further steps of obtaining a focal length of the camera; obtaining a field of view of the camera; calculating the principal distance of the camera; obtaining the horizontal offset and vertical offset between an axis of the camera and an axis of the laser; calculating the horizontal and vertical offsets in terms of pixels; calculating the difference between the horizontal and vertical offsets in terms of pixels and the x and y coordinates of the target pixel; and calculating the horizontal and vertical angles.
  • the method typically includes one, some or all of the further steps of instructing the pan and tilt unit to pan and tilt the range finder and/or camera through the vertical and horizontal angles; measuring the horizontal and vertical angles using the encoders; verifying that the angles through which the range finder and/or camera are moved are correct; obtaining horizontal and/or vertical correction angles by subtracting the measured horizontal and vertical angles from the calculated horizontal and vertical angles; adjusting the pan and tilt of the range finder and/or camera if necessary; and firing the range finder to obtain the range to the target.
  • Fig. 1 is a schematic representation of an image capture and laser transmitter and receiver unit in accordance with, and for use with, the present invention;
  • Fig. 2 shows schematically a first embodiment of survey apparatus
  • Fig. 3 shows an exploded view of the survey apparatus of Fig. 2 in more detail
  • Fig. 4 shows a simplified schematic illustration of a digital encoder
  • Fig. 5 schematically shows the survey apparatus of Figs 2 and 3 in use
  • Fig. 6 is a schematic representation of the display produced on a computer screen of a freeze frame image produced by a digital camera
  • Fig. 7 is a simplified schematic diagram of inside a digital camera
  • Fig. 8 is a simplified diagram illustrating how a principal distance (PD) may be calculated
  • FIG. 9 is a simplified diagram illustrating the offset between the laser and the camera in use;
  • Fig. 10 is a schematic representation illustrating a horizontal offset H offset outwith the camera;
  • Fig. 11 is a schematic representation illustrating a horizontal distance l x in terms of pixels, corresponding to H offset, within the camera;
  • Fig. 12 is a simplified diagram of a freeze frame image showing an object;
  • Fig. 13 is a schematic representation illustrating the relationship between a horizontal distance d x , a principal distance PD and an angle ⁇ ;
  • Fig. 14 is a schematic representation of a screen image of a target overlaid with range, bearing and inclination information;
  • FIG. 15a is a schematic representation of a vehicle provided with an elevating arm and survey apparatus showing the position of the apparatus when the vehicle is moving;
  • Fig. 15b is a schematic representation of the vehicle of Fig. 15a with the apparatus deployed on the arm;
  • Fig. 15c is a schematic representation of the vehicle of Figs 15a and 15b on a slope with the apparatus deployed on the arm;
  • Figs 16a and 16b are respective rear and side views of the survey apparatus deployed on the arm;
  • FIG. 17 is an exemplary screen shot of an area which has been surveyed using the survey apparatus;
  • Figs 18a and 18b are respective side and plan elevations of the vehicle of Figs 15a to 15c illustrating the survey apparatus being used to profile the ground in front of the vehicle;
  • Figs 19a and 19b are side and plan views of the profile of the ground in front of the vehicle which is displayed for a user of the apparatus;
  • Fig. 20 illustrates a head-up display used by the driver of the vehicle, the display being generated by the survey apparatus;
  • Fig. 21 illustrates calculating the height difference between two points A and B using the survey apparatus;
  • Fig. 22 illustrates calculating the height and distance between two points A and B using the survey apparatus; and
  • Fig. 23 illustrates using the survey apparatus to profile a surface.
  • Fig. 1 shows a schematic representation of an image capture and laser transmitter and receiver unit 10 for use with the present invention.
  • Unit 10 includes a laser 12 (which forms part of a laser range finder) which generates a beam of laser light 14.
  • the laser 12 is typically an invisible, eyesafe, gallium arsenide (GaAs) diode laser which emits a beam typically in the infra-red (IR) spectrum.
  • the laser 12 is typically externally triggered and is designed to measure up to 1000 metres or more to reflective and non-reflective targets. Any suitable type of laser 12 may be used; the present invention is not limited to the particular embodiment shown.
  • the beam 14 is reflected by a part-silvered prism 16 in a first direction substantially perpendicular to the direction of the initial beam 14, thereby creating a transmit beam 18.
  • the transmit beam 18 enters a series of transmitter optics 20 which collimates the transmit beam 18 into a target beam 22.
  • the target beam 22 is reflected by a target (schematically shown in Fig. 1 at 24) and is returned as a reflected beam 26.
  • the reflected beam 26 is collected by a series of receiver optics 28 which direct it to a laser light detector 30.
  • the axes of the transmit and receiver optics 20, 28 are calibrated to be coincident at infinity.
  • Signals from the detector 30 are sent to a processor (not shown) which calculates the distance from the apparatus 10 to the target 24 using a time-of-flight principle.
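The time-of-flight principle mentioned above can be sketched as follows; the function name and constant are illustrative, not taken from the patent. The measured interval covers the out-and-back path, so the range is half the distance travelled at the speed of light.

```python
# Speed of light in a vacuum, metres per second.
C = 299_792_458.0


def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Range to the target: the laser pulse travels out and back,
    so halve the total path length."""
    return C * round_trip_seconds / 2.0
```

For example, a round-trip time of 2 microseconds corresponds to a range of roughly 300 metres.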
  • CMOS: complementary metal-oxide semiconductor
  • the chip generally includes all the necessary drive circuitry for the camera 32. It should be noted that the camera need not be bore-sighted with the laser. In this case, the transmit laser beam 22 will be offset in the x and/or y directions from the centre of the picture taken by the camera 32. The offsets can be calculated and the survey apparatus calibrated (using software) to take into account the offsets, as will be described.
  • the transmit optics 20 serve a dual purpose by also acting as a lens for the camera 32.
  • light which enters the transmit optics 20 is collimated and directed to the camera 32 (shown schematically at 34) thereby producing an image of the target 24 at the camera 32.
  • the image which the camera 32 receives is digitised and sent to a processor (not shown). It should be noted that a separate lens may be used for the camera 32 if required.
  • Fig. 2 shows schematically a first embodiment of survey apparatus 100 mounted for movement in x and y directions
  • Fig. 3 shows an exploded view of the survey apparatus 100 of Fig. 2 in more detail.
  • the image capture and laser transmitter and receiver unit 10 (Fig. 1) is typically mounted within a casing 50.
  • the casing 50 is typically mounted to a U-shaped yoke 52, yoke 52 being coupled to a vertical shaft 54.
  • Shaft 54 is rotatably mounted to facilitate rotational movement (indicated by arrow 56 in Fig. 2) of the casing 50 in a horizontal plane (indicated by axis 58) which is the x-direction.
  • the rotational movement of the shaft 54 (and thus the yoke 52 and casing 50) is controlled by a motor 60 coupled to the shaft 54, typically via a gearbox (not shown in Fig. 2).
  • the operation of the motor 60 is controlled by the computer.
  • the angle of rotation of the casing 50 in the horizontal plane is measured accurately by a first digital encoder 62, attached to the shaft 54 in a known manner, which measures the angular displacement of the casing 50 (and thus the transmit laser beam 22) in the x-direction.
  • the yoke 52 allows the casing 50 (and thus the transmit laser beam 22) to be displaced in the y- direction as indicated by arrow 64.
  • the casing 50 is mounted to the yoke 52 via a horizontal shaft 66.
  • Shaft 66 is rotatably mounted to facilitate rotational movement (indicated by arrow 64 in Fig. 2) of the casing 50 in a vertical plane (indicated by axis 68) which is the y-direction.
  • the rotational movement of the shaft 66 (and thus the casing 50) is controlled by a motor 68 coupled to the shaft 66, typically via a gearbox (not shown in Fig. 2).
  • the operation of the motor 68 is controlled by the computer.
  • the angle of rotation of the casing 50 in the vertical plane is measured accurately by a second digital encoder 70, attached to shaft 66 in a known manner, which measures the angular displacement of the casing 50 (and thus the transmit laser beam 22) in the y-direction.
  • the motors 60, 68 provide for panning and tilting of the casing 50.
  • the output of the first and second encoders 62, 70 is electrically coupled to the computer to provide a feedback loop.
  • the feedback loop is required because the motors 60, 68 are typically coupled to the shafts 54, 66 via respective gearboxes and are thus not in direct contact with the shafts 54, 66.
  • the embodiment of the image capture and laser transmitter and receiver unit 10 shown in Fig. 2 is slightly different from that illustrated in Fig. 1.
  • the camera within unit 10 is not bore-sighted with the laser, and thus casing 50 is provided with a camera lens 72, a laser transmitter lens 74 and a laser receiver lens 76.
  • the laser transmitter lens 74 and the camera lens 72 may be integrated into a single lens as illustrated in Fig. 1.
  • the camera lens 72, laser transmitter lens 74 and laser receiver lens 76 would be co-axial. This could be achieved in practice by mechanically adjusting the lenses 72, 74, 76 to make them co-axial. However, this is a time consuming process and the offsets between the lenses can be calculated and the survey apparatus can be calibrated to take these offsets into account, as will be described. This calibration is generally simpler and quicker than mechanically aligning the lenses 72, 74, 76.
  • Referring to Fig. 3, there is shown in more detail the apparatus of Fig. 2. It should be noted that the casing 50 which houses the image capture and laser transmitter and receiver unit 10 is not provided with a separate camera lens (as in Fig. 2). It should also be noted that the casing 50 in Fig. 3 is mounted to facilitate rotational movement in the x-direction, but can be manually tilted in the y-direction.
  • the casing 50 is mounted to the U-shaped yoke 52.
  • the yoke 52 is coupled to the shaft 54 using any conventional means such as screws 80.
  • the shaft 54 is driven by the stepper motor 60 via a worm/wheel drive gearbox 82.
  • the digital encoder 62 is provided underneath a plate 84 through which the shaft 54 passes and to which the gearbox/motor assembly is attached.
  • Plate 84 also includes a rotary gear assembly 86 which is driven by the motor 60 via the worm gearbox 82 to facilitate rotational movement of the shaft 54.
  • the motor, gearbox and shaft assembly is mounted within an aluminium casing 86, the casing 86 also having a rack 88 mounted therein.
  • the rack 88 contains the necessary electronic circuitry for driving and controlling the operation of the survey apparatus, and includes a stepper motor driver board 90, a laser control board 92 and an interface board 94.
  • the first and second digital encoders 62, 70 may be of any conventional type, such as Moiré fringe, barcode or mask. Moiré fringe type encoders are typically used as they are more accurate.
  • Fig. 4 shows a simplified schematic illustration of a digital encoder, generally designated 110.
  • Encoder 110 typically comprises a casing 112 in which a disc 114 is rotatably mounted.
  • the disc 114 is provided with a pattern and is typically at least partially translucent. The type of pattern defined on the disc 114 determines the type of encoder.
  • a light emitting diode (LED) 116 is suspended above the disc 114 and emits a light beam (typically collimated by a lens, not shown) which shines through the disc 114.
  • the light emitted by the LED 116 is detected by a detector, typically a cell array 118.
  • These types of encoders usually have two output channels (only one shown in Fig. 4) and the phase relationship between the two signals can be used to determine the direction of rotation of the disc 114.
  • the encoder 110 produces a pulse output per unit of revolution.
  • the pattern on the disc 114 causes electrical pulses to be generated by the cell array 118 as the disc rotates.
  • These pulses can be counted and, given that one pulse is proportional to a certain degree of rotation, the angular rotation of the disc 114 and thus the shaft 54 can be calculated.
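The pulse-counting arithmetic above can be sketched as follows. This is an illustrative sketch only; the function names and the pulses-per-revolution figure are assumptions, not values from the patent.

```python
def encoder_angle_degrees(pulse_count: int, pulses_per_revolution: int) -> float:
    """Each pulse corresponds to a fixed fraction of a revolution, so the
    angular rotation of the disc (and thus the shaft) is a simple ratio."""
    return 360.0 * pulse_count / pulses_per_revolution


def signed_count(pulses: int, a_leads_b: bool) -> int:
    """The phase relationship between the two output channels gives the
    direction of rotation: channel A leading B is taken as positive here."""
    return pulses if a_leads_b else -pulses
```

For example, with an encoder delivering 36000 pulses per revolution, a count of 1000 pulses corresponds to a 10 degree rotation of the shaft.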
  • Fig. 5 shows the survey apparatus 100 (schematically represented in Fig. 5 but shown more clearly in Figs 2 and 3) in use.
  • the apparatus 100 is controlled and operated using software installed on the computer (shown schematically at 120) via a cable 122, telemetry system or other remote or hardwired control.
  • An image of the target is displayed on the computer screen using the camera 32 (Fig. 1) and is schematically shown as image 124 in Fig. 5.
  • the user of the apparatus 100 instructs the camera 32 (included as part of the apparatus 100) to take a freeze frame image of the target area.
  • the freeze frame image 124 is a digital image made up of a plurality of pixels.
  • Fig. 6 is a schematic representation of the display produced on the computer screen of the freeze frame image 124.
  • the image 124 is typically divided into an array of pixels, with the image containing, for example, 200 by 200 pixels in the array.
  • Each pixel within the array has an x and y coordinate associated with it using, for example, the centre C of the picture as a reference point.
  • each pixel within the digital image can be individually addressed using these x and y co-ordinates.
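The pixel-addressing scheme just described, with the centre C of the picture as the reference point, can be sketched as below. The function name and the 200 by 200 array size are taken from the example in the text; the axis conventions are my assumption.

```python
def pixel_to_centred_coords(col: int, row: int, width: int = 200, height: int = 200):
    """Convert array indices (col, row), counted from the top-left corner,
    into x, y coordinates referenced to the image centre C
    (x increasing rightwards, y increasing upwards)."""
    return col - width // 2, height // 2 - row
```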
  • the individual addresses for each pixel allow the user to select a particular object (for example a tree 126) within the digital image 124.
  • the tree 126 can be selected using a mouse pointer for example, where the mouse pointer is moved around the pixels of the digital image by movement of a conventional mouse provided with the computer in a known manner.
  • the x and y coordinates of each pixel may be displayed on the screen as the mouse pointer is moved around the image. Clicking the mouse button with the pointer on the tree 126 selects a particular pixel 128 within the array which is identified by its x and y coordinates.
  • the computer is then used to calculate the horizontal angle H A and the vertical angle V A (Fig. 6).
  • the horizontal angle H A and the vertical angle V A are the relative angles between the centre point C of the image and the pixel 128, as schematically shown in Fig. 6.
  • Fig. 7 is a simplified schematic diagram of inside the camera 32 which shows the camera lens 72 and a charge-coupled device (CCD) array 130.
  • the camera 32 is typically a zoom camera which therefore has a number of focal lengths which vary as the lens 72 is moved towards and away from the CCD array 130.
  • the angles of horizontal and vertical views, or the field of view in the horizontal and vertical direction ⁇ H , ⁇ v ( ⁇ v not shown in Fig. 7) can be calibrated and calculated at different focal lengths of the camera 32.
  • the CCD array 130 is square, and thus the field of view in the horizontal and vertical directions θ H , θ v will be the same; only the field of view in the horizontal direction θ H will therefore be considered.
  • the methodology described below considers one zoom position only.
  • the principal distance PD is defined as the distance from the plane of the lens 72 to the image plane (ie the plane of the CCD array 130) .
  • the principal distance PD can be converted into a distance in pixels. For example, if the field of view in the horizontal and vertical directions θ H , θ v is, for example, 10°, and the image contains 200 by 200 pixels, then moving one twentieth of a degree in the x or y direction is the equivalent of moving one pixel in the x or y direction.
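The worked example above, and the conversion of the principal distance into pixel units, can be sketched as follows. The function names are illustrative; expressing PD in pixels via half the image width and half the field of view is a standard pinhole-camera relation and is my assumption about how the conversion is done.

```python
import math


def degrees_per_pixel(fov_degrees: float, pixels_across: int) -> float:
    """With a 10 degree field of view across 200 pixels, one pixel
    corresponds to one twentieth (0.05) of a degree."""
    return fov_degrees / pixels_across


def principal_distance_pixels(fov_degrees: float, pixels_across: int) -> float:
    """Principal distance PD in pixel units: half the image width divided
    by the tangent of half the field of view."""
    return (pixels_across / 2.0) / math.tan(math.radians(fov_degrees) / 2.0)
```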
  • When initially using the apparatus 100, the camera 32 is used to take a calibration freeze frame image and the laser 12 is activated to return the range R to the centre point C of the image.
  • the laser axis is typically offset from the camera axis.
  • the horizontal and vertical offsets between the laser axis and the camera axis when the freeze frame image is taken are defined as H offset and V offset and are known. Knowing the range R and the horizontal and vertical offsets H offset , V offset allows the offset horizontal and vertical distances l x and l y in terms of pixels to be calculated.
  • the centre point C of the image 124 taken by the camera 32 and the laser spot 132 where the transmit laser beam 22 hits the target area is typically offset by the horizontal and vertical distances l x and l y .
  • Fig. 10 is a schematic representation illustrating the horizontal offset H offset outwith the camera 32
  • Fig. 11 is a schematic representation illustrating the horizontal distance l x in terms of pixels, corresponding to H offset , within the camera 32. Referring to Figs 10 and 11 and using basic trigonometry, the offset distances l x and l y in pixels can be calculated.
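The trigonometry of Figs 10 and 11 can be sketched as follows. Outwith the camera, the offset H offset subtends an angle θ at range R with tan θ = H offset / R; within the camera, the same angle maps to l x = PD tan θ with PD in pixel units. The function name is illustrative, and the small-offset geometry is my reading of the figures.

```python
def offset_in_pixels(offset_metres: float, range_metres: float,
                     pd_pixels: float) -> float:
    """l_x (or l_y) in pixels: tan(theta) = offset / range outwith the
    camera, and l = PD * tan(theta) within it, with PD in pixel units."""
    return pd_pixels * offset_metres / range_metres
```

For example, a 0.1 metre offset viewed at 100 metres with a principal distance of 1000 pixels shifts the laser spot by one pixel.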
  • the computer must calculate the horizontal and vertical angles H A , V A through which the casing 50 and thus the laser beam 22 must be moved in order to target the object.
  • the user selects the particular pixel (relating to the object of interest) within the image using a mouse pointer.
  • the selected object is represented by pixel A which has coordinates (x, y)
  • the laser spot 132 has coordinates (l x , l y ) calculated using the previous method.
  • the coordinates (x, y) of point A are already known using the coordinates of the pixel array of the image.
  • H A = inverse tan (d x /PD) and V A = inverse tan (d y /PD).
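The angle calculation can be sketched as below, where d x and d y are the pixel differences between the selected pixel A (x, y) and the laser spot (l x , l y ), and PD is the principal distance in pixel units. The function name and tuple conventions are illustrative only.

```python
import math


def aim_angles(target_px, laser_px, pd_pixels):
    """Return (H_A, V_A) in degrees: the pan and tilt angles needed to move
    the laser spot onto the selected pixel A."""
    d_x = target_px[0] - laser_px[0]
    d_y = target_px[1] - laser_px[1]
    return (math.degrees(math.atan(d_x / pd_pixels)),
            math.degrees(math.atan(d_y / pd_pixels)))
```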
  • the computer 120 instructs the motor 60 to pan through an angle of H A and simultaneously instructs the motor 68 to tilt through an angle of V A .
  • the transmit laser beam 22 is directed at the object A selected by the user to determine the range to it.
  • the motors 60, 68 are not directly coupled to the shafts 54, 66 (but via respective gearboxes) and thus can have errors which result in the laser beam 22 not being directed precisely at the object A.
  • the encoders 62, 70 can be used to measure more precisely the angles H A and V A through which the casing 50 was panned and tilted. If there is a difference between the measured angles H A and V A and the angles which were calculated as above, the computer can correct for this and can pan the casing 50 through an angle H AC which is the difference between the calculated angle H A and the measured angle H A , and similarly tilt the casing 50 through an angle V AC which is the difference between the calculated angle V A and the measured angle V A .
  • the process can then be repeated by using the encoders 62, 70 to check that the casing 50 has been panned and tilted through the angles H AC and V AC . If there is a difference again, then the process can be repeated to further correct for the errors introduced. This iteration process can be continued until the output from the encoders 62, 70 corresponds to the correct angles H A and V A . The laser 12 is then fired to give the range to the object A.
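The iteration just described (command the motor, read the encoder back, re-command the residual) can be sketched as a feedback loop. The callables, tolerance and iteration limit are illustrative stand-ins for the real motor and encoder interfaces, which the patent does not specify.

```python
def drive_to_angle(command_move, read_encoder, target_deg,
                   tolerance_deg=0.01, max_iterations=10):
    """Repeatedly command the motor through the residual (correction) angle
    and verify with the encoder, until the measured angle matches the
    calculated one to within the tolerance."""
    for _ in range(max_iterations):
        error = target_deg - read_encoder()   # correction angle H_AC or V_AC
        if abs(error) <= tolerance_deg:
            return True                       # angle verified; laser may fire
        command_move(error)                   # re-command the residual
    return False
```

Run against a simulated gearbox that under-travels by 10% on every command, the loop converges on the target angle within a few iterations.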
  • the user may then select another object within the image 124 which is of interest and use the above process to determine the range to that particular object. It should be noted however, that the process to determine the distances l x and l y need not be repeated as these distances will be constants.
  • the apparatus 100 can optionally include a Global Positioning System (GPS) (not shown).
  • the GPS is a satellite navigation system which provides a three-dimensional position of the GPS receiver (in this case mounted as part of the survey apparatus 100) and thus the position of the survey apparatus 100.
  • the GPS is used to calculate the position of the apparatus 100 anywhere in the world to within approximately ±25 metres.
  • the DGPS refines the position of the apparatus 100 locally using radio/satellite broadcasts which send differential correction signals, giving accuracy to approximately ±1 metre.
  • the GPS can also be used to record the time of all measured data to within 1 microsecond.
  • the apparatus 100 may further include an inclinometer (not shown) and a fluxgate compass (not shown) , both of which would be mounted within the casing 50.
  • the fluxgate compass generates a signal which gives a bearing to the target and the inclinometer generates a signal which gives the incline angle to the target.
  • These signals are preferably digitised so that they are in a machine-readable form for direct manipulation by the computer 120.
  • the survey apparatus may also be used to determine the position of objects, such as electricity pylons, buildings, trees or other man-made or natural structures.
  • the GPS system can be used to determine the position of the apparatus 100 anywhere in the world, which can be recorded.
  • the fluxgate compass within the casing 50 measures the bearing to the target, which can be used to determine the position of the target using the reading from the GPS system and the reading from the fluxgate compass.
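Combining the GPS fix, the compass bearing, the inclinometer angle and the laser range to locate the target can be sketched as below. The local east/north coordinate frame, the function name and the reduction of the slope range to a horizontal distance are my assumptions; the patent states only that the target position is determined from the GPS and compass readings.

```python
import math


def target_position(apparatus_east, apparatus_north, bearing_deg,
                    slope_range_m, incline_deg=0.0):
    """Target position in a local east/north frame: project the laser range
    onto the horizontal using the incline angle, then resolve it along the
    compass bearing (measured clockwise from north)."""
    horizontal = slope_range_m * math.cos(math.radians(incline_deg))
    east = apparatus_east + horizontal * math.sin(math.radians(bearing_deg))
    north = apparatus_north + horizontal * math.cos(math.radians(bearing_deg))
    return east, north
```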
  • the encoders 62, 70 may be used to determine the bearing to the target instead of the fluxgate compass. In this case, if the encoder is given an absolute reference, such as the bearing to an electricity tower or other prominent landmark which is either known or can be calculated, then the angle relative to the reference bearing can be calculated using the outputs from the encoders 62, 70, thus giving the bearing to the target.
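Deriving the bearing from the encoder output relative to a known reference bearing, as described above, reduces to a wrapped sum. The function name and the 0-360 degree convention are illustrative assumptions.

```python
def bearing_from_encoder(reference_bearing_deg: float,
                         encoder_angle_deg: float) -> float:
    """Absolute bearing to the target: the known reference bearing (e.g. to
    a prominent landmark) plus the encoder-measured rotation relative to it,
    wrapped into the range 0-360 degrees."""
    return (reference_bearing_deg + encoder_angle_deg) % 360.0
```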
  • the position of the apparatus and the calculated position of the target could be overlaid on a map displayed on the computer screen so that the accuracy of the map can be checked. This would also allow more accurate maps to be drawn.
  • the survey apparatus 100 of the present invention is advantageously operated remotely. As the apparatus 100 is computer-controlled, remote operation of the system can be achieved via the Internet, a telemetry link or a phone line for example. The survey apparatus 100 is particularly suited to applications where surveying is required in hazardous and/or hostile environments.
  • the screen image may include a sighting graticule 150 which allows the user to select the target with increased accuracy.
  • the orientation of the apparatus 100 can be moved using any particular control means associated with the computer such as a mouse, joystick or the like.
  • the apparatus 100 may be moved by the user clicking on a particular target within the image on the screen using a mouse for example.
  • the camera 32 will display an image on the screen which the user can use to determine the target area.
  • the apparatus 100 will be activated by pressing a key, clicking a mouse button or by any other conventional means, and the camera 32 will take the freeze frame which will be displayed on the computer screen.
  • the user can then select which target he wishes to range to within the picture using the mouse pointer. This will give the two-dimensional x, y pixel coordinates for the selected object.
  • the computer 120 may then calculate the horizontal and vertical angles H A , V A as described above.
  • the computer 120 then instructs the motors 60, 68 to pan and tilt through their respective angles until the laser transmit beam 22 is pointing at the object of interest. This may require the iteration process described above to ensure that the laser beam 22 is accurately aligned with the target object.
  • the laser 12 will be activated to determine the range R to a particular object. Once the range is known, the screen image can be overlaid with the range and the horizontal and vertical angles H A , V A , as indicated generally by 152 in Fig. 14. This information can then be saved for future reference and/or analysis.
  • the apparatus 100 is particularly suited to applications in hostile and/or hazardous environments.
  • the apparatus 100 can be operated remotely and thus ensures that the user can survey an area of interest from a relatively safe, remote environment.
  • the apparatus 100 can be mounted on top of a tripod stand, mounted on a vehicle on a telescopic mast, or on an elevated platform for greater visibility.
  • the apparatus 100 can be used to measure the range to most types of surfaces including earth, coal, rock and vegetation at distances in excess of 1 kilometre (km).
  • a vehicle 160 such as a tank
  • the apparatus 100 may be completely retracted when the vehicle 160 is in motion, and may be stored behind an armoured shield 164.
  • the casing 50 of the apparatus 100 would tilt downwards to a horizontal attitude and the telescopic arm 162 would extend so that the apparatus 100 was substantially protected by the armoured shield 164.
  • the vehicle is stopped and the apparatus 100 deployed on the telescopic arm 162 by reversing the procedure described above, as illustrated in Fig. 15b.
  • the telescopic arm is preferably mounted on a rotation joint 166 so that the apparatus 100 can be rotated through 360° as indicated by arrow 168 in the enlarged portion of Fig. 15b.
  • a motor 170 is coupled to the rotation joint 166 to facilitate rotation of the joint 166.
  • the apparatus 100 can typically be raised to a height of approximately 15 metres or more, depending upon the construction of the arm 162.
  • the arrangement of Figs 15a and 15b can accommodate large angles of roll and pitch of the vehicle, such as that shown in Fig. 15c.
  • the vehicle 160 is stationary on a slope 172 and has been rolled through an angle indicated by arrow 174 in Fig. 15c.
  • the user or the computer can correct for the angle of roll 174 by moving the arm 162 until the inclinometer indicates that the apparatus 100 is level.
  • a level 178 (Figs 16a, 16b) may be provided on the base of the apparatus 100 if required.
  • Figs 16a and 16b are front and side elevations of the apparatus 100 mounted on the arm 162.
  • the arm 162 can be rotated through 360° as indicated by arrow 176 in Fig. 16a.
  • the apparatus 100 is mounted on a pan and tilt head 180 to facilitate panning and tilting of the apparatus 100.
  • Servo motors within the pan and tilt head 180 pan and tilt the head 180 into the plane of roll and pitch of the vehicle 160 (Fig. 15c) . Thereafter, the motors 60, 68 of the apparatus 100 pan and tilt the apparatus 100 until it is level, using the level indicator 178 as a guide.
  • Further electronic levels (not shown) within the apparatus 100 can measure any residual dislevelment, which can be corrected for in the software before any measurements are taken.
  • a particular application of the apparatus 100 deployed on a vehicle 160 would be in a military operation.
  • the apparatus 100 can be deployed remotely on the arm 162 and used to survey the area surrounding the vehicle 160.
  • the computer 120 could be provided with a ground modelling software package wherein the user selects a number of key targets within the area using the method described above, and finds the range and bearing to, height of and global position of (if required) these targets.
  • the software package will then plot these points, including any heights which the GPS 182 (Figs 16a, 16b) can generate, and in-fill or morph the remaining background to produce an image of the terrain, such as that shown in Fig. 17.
  • Fig. 17 shows an exemplary terrain which has been surveyed, the terrain including a river 190, the river 190 being in a valley with sides 192, 194 rising upwardly from the river 190.
  • design templates of equipment carried by the vehicle 160 can be overlayed over the image to assess which type of equipment is required to cross the obstacle, such as the river 190.
  • the surveying operation can be done discreetly and in a very short time compared with conventional survey techniques. Such conventional techniques would typically deploy a number of soldiers to survey the area manually and report back. However, with the apparatus 100 deployed on the vehicle 160 the survey can be done more quickly, more accurately and more safely, without substantial risk to human life.
  • the apparatus 100 can be used to check the profile of the ground in front of the vehicle 160.
  • the profile of the ground could be shown in profile and plan views as illustrated in Figs 19a and 19b respectively.
  • the software on the computer 120 could be used to generate a head-up video display to which the driver of the vehicle 160 could refer.
  • Fig. 20 illustrates an example of the type of head-up display which could be generated.
  • the heading of the tank (measured by the fluxgate compass) is displayed, with the range to and height of the ground (and any obstructions) in front of the vehicle also being displayed.
  • the height displayed could be the height relative to the vehicle's position, or could be the absolute height obtained from the GPS 182.
  • Figs 21 to 23 illustrate three further applications of the apparatus 100.
  • Fig. 21 illustrates how to calculate the height between two points A and B (indicated by crosses in Fig. 21) .
  • the user will select the points A and B and then measure the range to them using the method described above. This will give three-dimensional coordinates for each point A and B. If it is assumed that the range to each point is approximately equal (which can be checked using the measured ranges) and that the x coordinates for each point are approximately equal (which can be checked using the x, y and z coordinates displayed on the screen), then the height from A to B is given by subtracting their respective y coordinates. This can then be displayed within a separate window within the screen, for example.
  • Fig. 22 illustrates the technique used to measure the height and distance between two points A and B.
  • the ranges to A and B are first measured using the apparatus 100 as described above.
  • the slope from A to B, the horizontal difference between A and B and the gradient of A to B are then calculated, the results being overlayed on the screen.
  • Fig. 23 illustrates how a rock face or the like may be profiled. Range measurements are taken at intervals along the profile (indicated by crosses in Fig. 23) . The height of each measurement will be calculated from either the inclinometer reading or can be determined using the GPS 182. Thus, a rock profile may be produced, as shown in Fig. 23.
  • a survey apparatus and method which provides for remote control operation using a video camera to relay images back to a host computer in real-time.
  • the image on the host computer allows the user to select particular objects of interest within the surveyed area and measure the range to these objects.
  • the apparatus can also be used to determine rock profiles, heights between two points, the position of certain objects and the like.
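The height-difference measurement described above (Figs 21 and 22) can be sketched in code. The following Python fragment is purely illustrative and not part of the disclosure: the function names, the example values, and the convention that y is the vertical axis (matching the description's "subtracting their respective y coordinates") are assumptions. It converts a measured range and pan/tilt angles into three-dimensional coordinates and subtracts the y coordinates.

```python
import math

def to_xyz(rng, h_angle_deg, v_angle_deg):
    """Convert a range and pan/tilt angles (degrees) to x, y, z
    coordinates relative to the instrument; y is taken as vertical."""
    h = math.radians(h_angle_deg)
    v = math.radians(v_angle_deg)
    horiz = rng * math.cos(v)          # horizontal component of the range
    return (horiz * math.sin(h),       # x: across the line of sight
            rng * math.sin(v),        # y: up
            horiz * math.cos(h))      # z: along the line of sight

def height_between(point_a, point_b):
    """Height from A to B, found by subtracting the y coordinates."""
    return point_b[1] - point_a[1]

# Example: B at the same 100 m range and bearing as A, tilted 5 degrees higher.
a = to_xyz(100.0, 0.0, 0.0)
b = to_xyz(100.0, 0.0, 5.0)
print(round(height_between(a, b), 2))  # prints 8.72
```

The same coordinates could feed the slope and gradient calculations of Fig. 22, since the horizontal difference is recoverable from the x and z components.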

Abstract

A survey apparatus and method is provided which allows a user of the apparatus to view a target area on a screen using a camera. The image on the screen can be captured and a target within the screen selected to measure the distance or range to the target using a laser range finder.

Description

"Survey Apparatus"
The present invention relates to a survey apparatus and method.
Conventional survey equipment typically measures the distance, bearing and inclination angle to a target (such as a tree, electricity pylon or the like) or a target area, with reference to the position of a user. Such conventional equipment does not allow the user to produce an image of the target which can be used to measure heights and distances between objects within the target area.
In addition, conventional sighting devices which are used to select a target to be surveyed often result in false surveys being made as the target is often not correctly identified.
According to a first aspect of the present invention there is provided a survey apparatus comprising a range finder, a camera, and a processor capable of processing image and range signals, wherein the camera facilitates aiming of the range finder.
According to a second aspect of the present invention there is provided a method of measuring the range to a target, the method comprising the steps of providing a camera to view a target area; providing a range finder; using the camera to produce an image of the target area; selecting the target within the target area; generating horizontal and vertical angles between a reference point and the target; and moving the range finder, if required, through the generated horizontal and vertical angles to measure the range to the target.
The camera is preferably a video camera, and more preferably a digital video camera. The camera may comprise a charge-coupled device (CCD) video camera. Alternatively, the camera may comprise a digital image camera. The apparatus typically includes a display device to allow a user to view a target area using the camera. The display device typically comprises a VGA monitor. Alternatively, the display device may comprise a VGA eyepiece monitor, such as a liquid-crystal display (LCD) or flat panel display. This offers the advantage that an image of the target may be viewed by the user to ensure that the correct target has been selected. Also, the survey apparatus may be operated remotely using the camera to view the target area.
The processor typically comprises a computer.
The range finder is typically a laser range finder. Optionally, the laser range finder is bore-sighted with the camera. This, in conjunction with the monitor used to identify the target area, offers the advantage that the user can be sure that the target area he has selected will be captured by the camera, and that the target area can be viewed remotely from the apparatus. In addition, if the camera is bore-sighted with the range finder, then any subsequent calculations made by the image processor do not require an offset between the camera and the range finder to be considered.
The apparatus typically calculates the range to specified points and incorporates such distance measurements into the image displayed on a screen.
The apparatus preferably includes a pan and tilt unit for panning and tilting of the range finder and/or camera. The pan and tilt unit typically comprises a first motor for panning of the range finder and/or camera, and a second motor for tilting of the range finder and/or camera. The pan and tilt unit typically includes first and second digital encoders for measuring the angles of pan and tilt. The first and second motors are typically controlled by the processor. The outputs of the first and second encoders are typically fed to the processor. This provides a feedback loop wherein the motors are operated to pan and tilt the range finder and/or camera through the generated horizontal and vertical angles. The encoders may then be used to check the angles to ensure that the range finder and/or camera were panned and tilted through the correct angles.
The image is preferably digitised, wherein the image comprises a plurality of pixels. The reference point is typically a pixel within the target area, and may be a centre point of the target area or one of the corners. The target is typically selected by selecting a pixel within the target, using, for example, a mouse pointer. This produces x and y coordinates for the target pixel.
Optionally, the survey apparatus includes a compass and an inclinometer and/or gyroscope. The compass is typically a digital fluxgate compass. These allow the bearing and angle of inclination to the target to be measured. The signals from the compass, inclinometer and/or gyroscope are preferably digitised to provide data to the processor. The bearing and/or angle of inclination to the target can be displayed on the screen.
Optionally, the survey apparatus further includes a position fixing system for identifying the geographical position of the apparatus. The position fixing system is preferably a Global Positioning System (GPS) which typically includes a Differential Global Positioning System (DGPS) . This provides the advantage that the approximate position of the apparatus can be recorded (and thus the position of the target using the measurements from the range finder and compass, where used) . The GPS/DGPS typically facilitates the time of the survey to be recorded. The signal from the GPS is typically digitised to provide data to the processor.
The survey apparatus is typically mounted on a mounting device. The mounting device typically comprises a tripod stand. The apparatus can optionally be mounted on an elevating platform, telescopic elevating tube, telescopic arm, robotic arm or the like. This provides the apparatus with a larger viewing area. The elevating platform or the like is typically capable of 360° rotation. This provides a complete viewing range.
The apparatus allows data gathering from within a vehicle to construct a digital terrain model of the terrain surrounding the vehicle.
The method typically comprises any one, some or all of the further steps of obtaining a focal length of the camera; obtaining a field of view of the camera; calculating the principal distance of the camera; obtaining the horizontal offset and vertical offset between an axis of the camera and an axis of the laser; calculating the horizontal and vertical offsets in terms of pixels; calculating the difference between the horizontal and vertical offsets in terms of pixels and the x and y coordinates of the target pixel; and calculating the horizontal and vertical angles.
Optionally, the method typically includes one, some or all of the further steps of instructing the pan and tilt unit to pan and tilt the range finder and/or camera through the vertical and horizontal angles; measuring the horizontal and vertical angles using the encoders; verifying that the angles through which the range finder and/or camera are moved are correct; obtaining horizontal and/or vertical correction angles by subtracting the measured horizontal and vertical angles from the calculated horizontal and vertical angles; adjusting the pan and tilt of the range finder and/or camera if necessary; and firing the range finder to obtain the range to the target.
Embodiments of the present invention shall now be described, with reference to the accompanying drawings, in which: - Fig. 1 is a schematic representation of an image capture and laser transmitter and receiver unit in accordance with, and for use with, the present invention; Fig. 2 shows schematically a first embodiment of survey apparatus; Fig. 3 shows an exploded view of the survey apparatus of Fig. 2 in more detail; Fig. 4 shows a simplified schematic illustration of a digital encoder; Fig. 5 schematically shows the survey apparatus of Figs 2 and 3 in use; Fig. 6 is a schematic representation of the display produced on a computer screen of a freeze frame image produced by a digital camera; Fig. 7 is a simplified schematic diagram of inside a digital camera; Fig. 8 is a simplified diagram illustrating how a principal distance (PD) may be calculated; Fig. 9 is a simplified diagram illustrating the offset between the laser and the camera in use; Fig. 10 is a schematic representation illustrating a horizontal offset Hoffset outwith the camera; Fig. 11 is a schematic representation illustrating a horizontal distance lx in terms of pixels, corresponding to Hoffset, within the camera; Fig. 12 is a simplified diagram of a freeze frame image showing an object; Fig.
13 is a schematic representation illustrating the relationship between a horizontal distance dx, a principal distance PD and an angle θ; Fig. 14 is a schematic representation of a screen image of a target overlayed with range, bearing and inclination information; Fig. 15a is a schematic representation of a vehicle provided with an elevating arm and survey apparatus showing the position of the apparatus when the vehicle is moving; Fig. 15b is a schematic representation of the vehicle of Fig. 15a with the apparatus deployed on the arm; Fig. 15c is a schematic representation of the vehicle of Figs 15a and 15b on a slope with the apparatus deployed on the arm; Figs 16a and 16b are respective rear and side views of the survey apparatus deployed on the arm; Fig. 17 is an exemplary screen shot of an area which has been surveyed using the survey apparatus; Figs 18a and 18b are respective side and plan elevations of the vehicle of Figs 15a to 15c illustrating the survey apparatus being used to profile the ground in front of the vehicle; Figs 19a and 19b are side and plan views of the profile of the ground in front of the vehicle which is displayed for a user of the apparatus; Fig. 20 illustrates a head-up display used by the driver of the vehicle, the display being generated by the survey apparatus; Fig. 21 illustrates calculating the height difference between two points A and B using the survey apparatus; Fig. 22 illustrates calculating the height and distance between two points A and B using the survey apparatus; and Fig. 23 illustrates using the survey apparatus to profile a surface.
Referring to the drawings, Fig. 1 shows a schematic representation of an image capture and laser transmitter and receiver unit 10 for use with the present invention. Unit 10 includes a laser 12 (which forms part of a laser range finder) which generates a beam of laser light 14. The laser 12 is typically an invisible, eyesafe, gallium arsenide (GaAs) diode laser which emits a beam typically in the infra-red (IR) spectrum. The laser 12 is typically externally triggered and is designed to measure up to 1000 metres or more to reflective and non-reflective targets. Any particular type of laser 12 may be used and the present invention is not limited to the particular embodiment shown.
The beam 14 is reflected by a part-silvered prism 16 in a first direction substantially perpendicular to the direction of the initial beam 14, thereby creating a transmit beam 18. The transmit beam 18 enters a series of transmitter optics 20 which collimates the transmit beam 18 into a target beam 22. The target beam 22 is reflected by a target (schematically shown in Fig. 1 at 24) and is returned as a reflected beam 26. The reflected beam 26 is collected by a series of receiver optics 28 which direct it to a laser light detector 30. The axes of the transmit and receiver optics 20, 28 are calibrated to be coincident at infinity.
Signals from the detector 30 are sent to a processor (not shown) which calculates the distance from the apparatus 10 to the target 24 using a time-of-flight principle. Thus, by halving the time taken for the light to reach the target 24 and be reflected back to the detector 30, and multiplying the result by the speed of light, the distance to the target 24 may be calculated.
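The time-of-flight calculation can be sketched as follows. This Python fragment is illustrative only; the example round-trip time is an assumed value, not one taken from the description.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_seconds):
    """Halve the round-trip time and multiply by the speed of light
    to obtain the one-way distance from the apparatus to the target."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after about 6.671 microseconds corresponds to about 1 km.
print(round(range_from_time_of_flight(6.671e-6)))  # prints 1000
```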
Bore-sighted with the laser 12 (using the part-silvered prism 16) is a digital video camera 32. The camera 32 is preferably a complementary metal-oxide semiconductor (CMOS) camera which is formed on a silicon chip. The chip generally includes all the necessary drive circuitry for the camera 32. It should be noted that the camera need not be bore-sighted with the laser. In this case, the transmit laser beam 22 will be offset in the x and/or y directions from the centre of the picture taken by the camera 32. The offsets can be calculated and the survey apparatus calibrated (using software) to take into account the offsets, as will be described.
The transmit optics 20 serve a dual purpose by also acting as a lens for the camera 32. Thus, light which enters the transmit optics 20 is collimated and directed to the camera 32 (shown schematically at 34) thereby producing an image of the target 24 at the camera 32. The image which the camera 32 receives is digitised and sent to a processor (not shown) . It should be noted that a separate lens may be used for the camera 32 if required.
Referring now to Figs 2 and 3, Fig. 2 shows schematically a first embodiment of survey apparatus 100 mounted for movement in x and y directions, and Fig. 3 shows an exploded view of the survey apparatus 100 of Fig. 2 in more detail.
Referring firstly to Fig. 2, the image capture and laser transmitter and receiver unit 10 (Fig. 1) is typically mounted within a casing 50. The casing 50 is typically mounted to a U-shaped yoke 52, yoke 52 being coupled to a vertical shaft 54. Shaft 54 is rotatably mounted to facilitate rotational movement (indicated by arrow 56 in Fig. 2) of the casing 50 in a horizontal plane (indicated by axis 58) which is the x-direction. The rotational movement of the shaft 54 (and thus the yoke 52 and casing 50) is controlled by a motor 60 coupled to the shaft 54, typically via a gearbox (not shown in Fig. 2) . The operation of the motor 60 is controlled by the computer.
The angle of rotation of the casing 50 in the horizontal plane (ie the x-direction) is measured accurately by a first digital encoder 62, attached to the shaft 54 in a known manner, which measures the angular displacement of the casing 50 (and thus the transmit laser beam 22) in the x-direction.
Similarly, the yoke 52 allows the casing 50 (and thus the transmit laser beam 22) to be displaced in the y-direction as indicated by arrow 64. The casing 50 is mounted to the yoke 52 via a horizontal shaft 66. Shaft 66 is rotatably mounted to facilitate rotational movement (indicated by arrow 64 in Fig. 2) of the casing 50 in a vertical plane (indicated by axis 68) which is the y-direction. The rotational movement of the shaft 66 (and thus the yoke 52 and casing 50) is controlled by a motor 68 coupled to the shaft 66, typically via a gearbox (not shown in Fig. 2). The operation of the motor 68 is controlled by the computer.
The angle of rotation of the casing 50 in the vertical plane (ie the y-direction) is measured accurately by a second digital encoder 70, attached to shaft 66 in a known manner, which measures the angular displacement of the casing 50 (and thus the transmit laser beam 22) in the y-direction. Thus, the motors 60, 68 provide for panning and tilting of the casing 50. The output of the first and second encoders 62, 70 is electrically coupled to the computer to provide a feedback loop. The feedback loop is required because the motors 60, 68 are typically coupled to the shafts 54, 66 via respective gearboxes and are thus not in direct contact with the shafts 54, 66. This makes the movement of the casing 50 which is effected by operation of the motors 60, 68 less accurate. However, as the encoders 62, 70 are coupled directly to their respective shafts 54, 66 then the panning and tilting of the casing in the x- and y-directions can be measured more accurately, as will be described.
The embodiment of the image capture and laser transmitter and receiver unit 10 shown in Fig. 2 is slightly different from that illustrated in Fig. 1. The camera within unit 10 is not bore-sighted with the laser, and thus casing 50 is provided with a camera lens 72, a laser transmitter lens 74 and a laser receiver lens 76. It should be noted that the laser transmitter lens 74 and the camera lens 72 may be integrated into a single lens as illustrated in Fig. 1. Ideally, the camera lens 72, laser transmitter lens 74 and laser receiver lens 76 would be co-axial. This could be achieved in practice by mechanically adjusting the lenses 72, 74, 76 to make them co-axial. However, this is a time consuming process and the offsets between the lenses can be calculated and the survey apparatus can be calibrated to take these offsets into account, as will be described. This calibration is generally simpler and quicker than mechanically aligning the lenses 72, 74, 76.
Referring to Fig. 3, there is shown in more detail the apparatus of Fig. 2. It should be noted that the casing 50 which houses the image capture and laser transmitter and receiver unit 10 is not provided with a separate camera lens (as in Fig. 2) . It should also be noted that the casing 50 in Fig. 3 is mounted to facilitate rotational movement in the x-direction, but can be manually tilted in the y-direction.
As can be seen more clearly in Fig. 3, the casing 50 is mounted to the U-shaped yoke 52. The yoke 52 is coupled to the shaft 54 using any conventional means such as screws 80. The shaft 54 is driven by the stepper motor 60 via a worm/wheel drive gearbox 82. The digital encoder 62 is provided underneath a plate 84 through which the shaft 54 passes and to which the gearbox/motor assembly is attached. Plate 84 also includes a rotary gear assembly 86 which is driven by the motor 60 via the worm gearbox 82 to facilitate rotational movement of the shaft 54.
The motor, gearbox and shaft assembly is mounted within an aluminium casing 86, the casing 86 also having a rack 88 mounted therein. The rack 88 contains the necessary electronic circuitry for driving and controlling the operation of the survey apparatus, and includes a stepper motor driver board 90, a laser control board 92 and an interface board 94.
The first and second digital encoders 62, 70 may be of any conventional type, such as Moiré fringe, barcode or mask. Moiré fringe type encoders are typically used as they are more accurate. Fig. 4 shows a simplified schematic illustration of a digital encoder, generally designated 110. Encoder 110 typically comprises a casing 112 in which a disc 114 is rotatably mounted. The disc 114 is provided with a pattern and is typically at least partially translucent. The type of pattern defined on the disc 114 determines the type of encoder.
A light emitting diode (LED) 116 is suspended above the disc 114 and emits a light beam (typically collimated by a lens (not shown)) which shines through the disc 114. The light emitted by the LED 116 is detected by a detector, typically a cell array 118. As the disc 114 rotates (in conjunction with the shaft to which it is coupled) a number of electrical outputs are generated per revolution of the disc 114 by the cell array 118 which detects the light passing through the disc 114 from the LED 116. These types of encoders usually have two output channels (only one shown in Fig. 4) and the phase relationship between the two signals can be used to determine the direction of rotation of the disc 114.
The encoder 110 produces a pulse output per unit of revolution. Thus, as the disc 114 rotates, the pattern on the disc 114 causes electrical pulses to be generated by the cell array 118 in response to the pattern on the disc 114. These pulses can be counted and, given that one pulse is proportional to a certain degree of rotation, the angular rotation of the disc 114 and thus the shaft 54 can be calculated.
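The pulse-counting calculation described above can be sketched as follows. This Python fragment is illustrative: the pulses-per-revolution figure is an assumed example, not a value from the description, and the direction sign stands in for the phase comparison of the two output channels.

```python
def pulses_to_degrees(pulse_count, pulses_per_revolution, direction=1):
    """Convert counted encoder pulses to an angle in degrees.

    Each pulse is proportional to a fixed fraction of a revolution,
    so the angle is 360 degrees scaled by the pulse count. `direction`
    is +1 or -1, as derived from the phase relationship between the
    encoder's two output channels."""
    return direction * 360.0 * pulse_count / pulses_per_revolution

# A hypothetical 36 000-pulse encoder resolves 0.01 degrees per pulse.
print(pulses_to_degrees(1500, 36_000))  # prints 15.0
```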
Fig. 5 shows the survey apparatus 100 (schematically represented in Fig. 5 but shown more clearly in Figs 2 and 3) in use. The apparatus 100 is controlled and operated using software installed on the computer (shown schematically at 120) via a cable 122, telemetry system or other remote or hardwired control. An image of the target is displayed on the computer screen using the camera 32 (Fig. 1) and is schematically shown as image 124 in Fig. 5. When the image 124 of the target area of interest is viewed on the screen, the user of the apparatus 100 instructs the camera 32 (included as part of the apparatus 100) to take a freeze frame image of the target area. The freeze frame image 124 is a digital image made up of a plurality of pixels and Fig. 6 is a schematic representation of the display produced on the computer screen of the freeze frame image 124. The image 124 is typically divided into an array of pixels, with the image containing, for example, 200 by 200 pixels in the array.
Each pixel within the array has an x and y coordinate associated with it using, for example, the centre C of the picture as a reference point. Thus, each pixel within the digital image can be individually addressed using these x and y co-ordinates.
The individual addresses for each pixel allow the user to select a particular object (for example a tree 126) within the digital image 124. The tree 126 can be selected using a mouse pointer for example, where the mouse pointer is moved around the pixels of the digital image by movement of a conventional mouse provided with the computer in a known manner. The x and y coordinates of each pixel may be displayed on the screen as the mouse pointer is moved around the image. Clicking the mouse button with the pointer on the tree 126 selects a particular pixel 128 within the array which is identified by its x and y coordinates.
The computer is then used to calculate the horizontal angle HA and the vertical angle VA (Fig. 6). The horizontal angle HA and the vertical angle VA are the relative angles between the centre point C of the image and the pixel 128, as schematically shown in Fig. 6.
The methodology for calculating the horizontal angle HA and the vertical angle VA from the pixel x, y coordinates is as follows. Fig. 7 is a simplified schematic diagram of inside the camera 32 which shows the camera lens 72 and a charge-coupled device (CCD) array 130. The camera 32 is typically a zoom camera which therefore has a number of focal lengths which vary as the lens 72 is moved towards and away from the CCD array 130.
Referring to Fig. 7, the angles of horizontal and vertical views, or the field of view in the horizontal and vertical directions θH, θV (θV not shown in Fig. 7), can be calibrated and calculated at different focal lengths of the camera 32. For simplicity, it is assumed that the CCD array 130 is square, and thus the field of view in the horizontal and vertical directions θH, θV will be the same, and thus only the field of view in the horizontal direction θH will be considered. The methodology described below considers one zoom position only.
Having calculated (or otherwise obtained) the field of view in the horizontal direction ΘH then the principal distance PD (in pixels) can be calculated. The principal distance PD is defined as the distance from the plane of the lens 72 to the image plane (ie the plane of the CCD array 130) .
Referring to Fig. 8, if the image width on the CCD array is defined as HR, then using basic trigonometry tan(θH/2) = HR/(2PD). Thus,
PD = HR / (2 tan(θH/2))
If the distance between each pixel in the image 124 in a certain unit (ie millimetres) is known, then the principal distance PD can be converted into a distance in pixels. For example, if the field of view in the horizontal and vertical directions θH, θV is, for example, 10°, and the image contains 200 by 200 pixels, then moving one twentieth of a degree in the x or y direction is the equivalent of moving one pixel in the x or y direction.
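The principal distance formula and the degrees-per-pixel conversion can be sketched as follows, using the 10° field of view and 200 by 200 pixel image from the example above (an illustrative Python fragment; the function name is assumed).

```python
import math

def principal_distance_pixels(image_width_px, fov_deg):
    """PD = HR / (2 tan(thetaH / 2)); expressing the image width HR
    in pixels yields the principal distance PD in pixels."""
    return image_width_px / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

pd = principal_distance_pixels(200, 10.0)
print(round(pd, 1))   # principal distance in pixels, prints 1143.0
print(10.0 / 200)     # degrees per pixel: prints 0.05 (one twentieth of a degree)
```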
When initially using the apparatus 100, the camera 32 is used to take a calibration freeze frame image and the laser 12 is activated to return the range R to the centre point C of the image. However, the laser axis is typically offset from the camera axis. The horizontal and vertical offsets between the laser axis and the camera axis when the freeze frame image is taken are defined as Hoffset and Voffset and are known. Knowing the range R and the horizontal and vertical offsets Hoffset, Voffset allows the offset horizontal and vertical distances lx and ly in terms of pixels to be calculated. Referring to Fig. 9, the centre point C of the image 124 taken by the camera 32 and the laser spot 132 where the transmit laser beam 22 hits the target area are typically offset by the horizontal and vertical distances lx and ly.
Fig. 10 is a schematic representation illustrating the horizontal offset Hoffset outwith the camera 32, and Fig. 11 is a schematic representation illustrating the horizontal distance lx in terms of pixels, corresponding to Hoffset, within the camera 32. Referring to Figs 10 and 11 and using basic trigonometry,
tan θ = Hoffset/R and lx = PD(tan θ). Thus, lx = PD(Hoffset/R), and it follows that ly = PD(Voffset/R).
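The laser-spot offset in pixels can be sketched as follows. This Python fragment is illustrative only; the principal distance, offsets and range are assumed example values, not figures from the description.

```python
def laser_spot_offset_pixels(pd_px, h_offset, v_offset, rng):
    """lx = PD * (Hoffset / R) and ly = PD * (Voffset / R): the
    position of the laser spot, in pixels, relative to the image
    centre C. Offsets and range must share the same length unit."""
    return pd_px * h_offset / rng, pd_px * v_offset / rng

# Assumed example: PD of 1143 px, 5 cm lens offsets, 50 m range.
lx, ly = laser_spot_offset_pixels(1143.0, 0.05, 0.05, 50.0)
print(round(lx, 3), round(ly, 3))  # prints 1.143 1.143
```

Note that lx and ly shrink as the range R grows, which is why the offsets matter most at short range.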
If the range to a certain object within the target area (such as the tree 126 in Fig. 6) is required, then the computer must calculate the horizontal and vertical angles HA, VA through which the casing 50 and thus the laser beam 22 must be moved in order to target the object.
The user selects the particular pixel (relating to the object of interest) within the image using a mouse pointer. In Fig. 12, the selected object is represented by pixel A which has coordinates (x, y), and the laser spot 132 has coordinates (lx, ly) calculated using the previous method. The coordinates (x, y) of point A are already known using the coordinates of the pixel array of the image.
If the horizontal distance between pixel A and the laser spot 132 is defined as dx, and similarly the vertical distance between pixel A and the laser spot 132 is defined as dy, then
dx = x - lx and dy = y - ly,
and it follows that the horizontal and vertical angles HA, VA can be calculated as
HA = inverse tan (dx/PD)
and VA = inverse tan (dy/PD) .
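Putting these steps together, the aiming angles could be derived from a selected pixel as in the following sketch (Python; function and variable names are illustrative, not from the patent):

```python
import math

def aiming_angles(x, y, lx, ly, pd):
    """Angles HA, VA (radians) through which the casing must be panned
    and tilted so the laser points at the selected pixel A = (x, y).

    (lx, ly) -- pixel coordinates of the laser spot
    pd       -- principal distance PD of the camera, in pixels
    """
    dx = x - lx  # horizontal pixel distance from the laser spot to A
    dy = y - ly  # vertical pixel distance from the laser spot to A
    ha = math.atan(dx / pd)  # HA = inverse tan(dx / PD)
    va = math.atan(dy / pd)  # VA = inverse tan(dy / PD)
    return ha, va
```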
Referring back to Fig. 2, having calculated the horizontal and vertical angles HA, VA through which the casing 50 must be rotated to measure the range to the object A, the computer 120 instructs the motor 60 to pan through an angle of HA and simultaneously instructs the motor 68 to tilt through an angle of VA. Thus, the transmit laser beam 22 is directed at the object A selected by the user to determine the range to it.
However, the motors 60, 68 are not directly coupled to the shafts 54, 66 (but via respective gearboxes) and thus can introduce errors which result in the laser beam 22 not being directed precisely at the object A. The encoders 62, 70 can be used to measure more precisely the angles through which the casing 50 was actually panned and tilted. If there is a difference between the measured angles and the angles HA and VA which were calculated as above, the computer can correct for this by panning the casing 50 through an angle HAC, which is the difference between the calculated angle HA and the measured angle, and similarly tilting the casing 50 through an angle VAC, which is the difference between the calculated angle VA and the measured angle. The encoders 62, 70 are then used again to check that the casing 50 has been panned and tilted through the angles HAC and VAC. If there is again a difference, the process can be repeated to further correct for the errors introduced. This iteration process can be continued until the output from the encoders 62, 70 corresponds to the correct angles HA and VA. The laser 12 is then fired to give the range to the object A.
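The feedback loop described above can be sketched as follows (Python; the pan/tilt stage is a toy stand-in for the real motors and encoders, shown only to make the iteration concrete):

```python
class SimPanTilt:
    """Toy pan/tilt stage whose gearbox under-shoots each command by 5 %,
    but whose encoders report the angle actually moved."""

    def __init__(self):
        self.pan = 0.0
        self.tilt = 0.0
        self._last = (0.0, 0.0)

    def move(self, d_pan, d_tilt):
        # Gearbox error: the stage moves only 95 % of the commanded angle.
        self._last = (0.95 * d_pan, 0.95 * d_tilt)
        self.pan += self._last[0]
        self.tilt += self._last[1]

    def read_encoders(self):
        # Angles actually moved by the last command.
        return self._last


def drive_to_angles(stage, ha, va, tolerance=1e-4):
    """Iterate until the encoder-verified motion matches HA and VA."""
    remaining_h, remaining_v = ha, va
    while abs(remaining_h) > tolerance or abs(remaining_v) > tolerance:
        stage.move(remaining_h, remaining_v)      # command the motors
        moved_h, moved_v = stage.read_encoders()  # measure actual motion
        # Correction angles HAC, VAC: calculated minus measured.
        remaining_h -= moved_h
        remaining_v -= moved_v
```

Each pass shrinks the residual error by the gearbox error factor, so the loop converges in a handful of iterations.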
The user may then select another object of interest within the image 124 and use the above process to determine the range to that particular object. It should be noted, however, that the process to determine the distances lx and ly need not be repeated, as these distances are constants.
The apparatus 100 can optionally include a Global Positioning System (GPS) receiver (not shown). GPS is a satellite navigation system which provides the three-dimensional position of the GPS receiver (in this case mounted as part of the survey apparatus 100) and thus the position of the survey apparatus 100. The GPS can be used to calculate the position of the apparatus 100 anywhere in the world to within approximately ± 25 metres. Using radio/satellite broadcasts of differential correction signals, the position can be calculated locally to within ± 1 metre. The GPS can also be used to record the time of all measured data to 1 microsecond.
The apparatus 100 may further include an inclinometer (not shown) and a fluxgate compass (not shown) , both of which would be mounted within the casing 50. The fluxgate compass generates a signal which gives a bearing to the target and the inclinometer generates a signal which gives the incline angle to the target. These signals are preferably digitised so that they are in a machine-readable form for direct manipulation by the computer 120.
Thus, in addition to being used to find ranges to specific targets, the survey apparatus may also be used to determine the position of objects, such as electricity pylons, buildings, trees or other man-made or natural structures. The GPS system can be used to determine the position of the apparatus 100 anywhere in the world, which can be recorded. Optionally, the fluxgate compass within the casing 50 measures the bearing to the target, which can be used to determine the position of the target using the reading from the GPS system and the reading from the fluxgate compass .
It should also be noted that the encoders 62, 70 may be used to determine the bearing to the target instead of the fluxgate compass. In this case, if the encoder is given an absolute reference, such as the bearing to an electricity tower or other prominent landmark which is either known or can be calculated, then the angle relative to the reference bearing can be calculated using the outputs from the encoders 62, 70, thus giving the bearing to the target.
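For instance, with bearings in degrees (an illustrative convention; the patent does not fix units), the target bearing follows by simple addition modulo 360:

```python
def bearing_to_target(reference_bearing, encoder_angle):
    """Bearing to the target, given the known absolute bearing of a
    reference landmark and the encoder-measured pan angle from it."""
    return (reference_bearing + encoder_angle) % 360.0
```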
In addition, the position of the apparatus and the calculated position of the target could be overlayed on a map displayed on the computer screen so that the accuracy of the map can be checked. This would also allow more accurate maps to be drawn.
Referring to Fig. 14, there is shown an exemplary image printed from the screen of the computer 120. The survey apparatus 100 of the present invention is advantageously operated remotely. As the apparatus 100 is computer-controlled, remote operation of the system can be achieved via the Internet, a telemetry link or a phone line for example. The survey apparatus 100 is particularly suited to applications where surveying is required in hazardous and/or hostile environments.
Thus, as shown in Fig. 14, the screen image may include a sighting graticule 150 which allows the user to select the target with increased accuracy. The orientation of the apparatus 100 can be moved using any particular control means associated with the computer such as a mouse, joystick or the like. In particular, the apparatus 100 may be moved by the user clicking on a particular target within the image on the screen using a mouse for example. As the apparatus is moved, the camera 32 will display an image on the screen which the user can use to determine the target area.
Thereafter, the apparatus 100 will be activated by pressing a key, clicking a mouse button or by any other conventional means, and the camera 32 will take the freeze frame which will be displayed on the computer screen. The user can then select the target he wishes to range to within the picture using the mouse pointer. This will give the two-dimensional x, y pixel coordinates of the selected object. The computer 120 may then calculate the horizontal and vertical angles HA, VA as described above. The computer 120 then instructs the motors 60, 68 to pan and tilt through their respective angles until the laser transmit beam 22 is pointing at the object of interest. This may require the iteration process described above to ensure that the laser beam 22 is accurately aligned with the target object. Once the beam 22 is aligned with the object, the laser 12 will be activated to determine the range R to that object. Once the range is known, the screen image can be overlayed with the range and the horizontal and vertical angles HA, VA, as indicated generally by 152 in Fig. 14. This information can then be saved for future reference and/or analysis.
The apparatus 100 is particularly suited to applications in hostile and/or hazardous environments. The apparatus 100 can be operated remotely and thus ensures that the user can survey an area of interest from a relatively safe, remote environment.
The apparatus 100 can be mounted on top of a tripod stand, mounted on a vehicle on a telescopic mast, or on an elevated platform for greater visibility. The apparatus 100 can be used to measure the range to most types of surfaces including earth, coal, rock and vegetation at distances in excess of 1 kilometre (km).
Referring to Figs 15a to c, there is shown a vehicle 160 (such as a tank) which is provided with the apparatus 100 mounted on a telescopic or extendable arm 162. As illustrated in Fig. 15a, the apparatus 100 may be completely retracted when the vehicle 160 is in motion, and may be stored behind an armoured shield 164. The casing 50 of the apparatus 100 would tilt downwards to a horizontal attitude and the telescopic arm 162 would extend so that the apparatus 100 was substantially protected by the armoured shield 164.
When the area to be surveyed is reached, the vehicle is stopped and the apparatus 100 deployed on the telescopic arm 162 by reversing the procedure described above, as illustrated in Fig. 15b. The telescopic arm is preferably mounted on a rotation joint 166 so that the apparatus 100 can be rotated through 360° as indicated by arrow 168 in the enlarged portion of Fig. 15b. A motor 170 is coupled to the rotation joint 166 to facilitate rotation of the joint 166. The apparatus 100 can typically be raised to a height of approximately 15 metres or more, depending upon the construction of the arm 162.
The particular configuration shown in Figs 15a and 15b can accommodate large angles of roll and pitch of the vehicle, such as that shown in Fig. 15c. In Fig. 15c, the vehicle 160 is stationary on a slope 172 and has been rolled through an angle indicated by arrow 174 in Fig. 15c. The user or the computer can correct for the angle of roll 174 by moving the arm 162 until the inclinometer indicates that the apparatus 100 is level. A level 178 (Figs 16a, 16b) may be provided on the base of the apparatus 100 if required.
Figs 16a and 16b are front and side elevations of the apparatus 100 mounted on the arm 162. As can be seen from Figs 16a and 16b, the arm 162 can be rotated through 360° as indicated by arrow 176 in Fig. 16a. The apparatus 100 is mounted on a pan and tilt head 180 to facilitate panning and tilting of the apparatus 100.
Servo motors within the pan and tilt head 180 pan and tilt the head 180 into the plane of roll and pitch of the vehicle 160 (Fig. 15c) . Thereafter, the motors 60, 68 of the apparatus 100 pan and tilt the apparatus 100 until it is level, using the level indicator 178 as a guide.
Further electronic levels (not shown) within the apparatus 100 can measure any residual dislevelment, which can be corrected for in the software before any measurements are taken.
A particular application of the apparatus 100 deployed on a vehicle 160 would be in a military operation. The apparatus 100 can be deployed remotely on the arm 162 and used to survey the area surrounding the vehicle 160. The computer 120 could be provided with a ground modelling software package wherein the user selects a number of key targets within the area using the method described above, and finds the range and bearing to, height of and global position of (if required) these targets. The software package will then plot these points, including any heights which the GPS 182 (Figs 16a, 16b) can generate, and in-fill or morph the remaining background to produce an image of the terrain, such as that shown in Fig. 17.
Fig. 17 shows an exemplary terrain which has been surveyed, the terrain including a river 190, the river 190 being in a valley with sides 192, 194 rising upwardly from the river 190. Once the ground has been modelled, design templates of equipment carried by the vehicle 160 (or any other vehicle, aircraft etc) can be overlayed on the image to assess which type of equipment is required to cross the obstacle, such as the river 190. The surveying operation can be done discreetly and in a very short time compared with conventional survey techniques, which would typically be to deploy a number of soldiers to survey the area manually and report back. With the apparatus 100 deployed on the vehicle 160, however, the survey can be done more quickly, more accurately and more safely, without substantial risk to human life.
It is possible to conduct multiple surveys with the vehicle 160 in one or more locations, with the data from each survey being integrated to give a more accurate overall survey of the surrounding area.
Furthermore, if the arm 162 was disposed at the front of the vehicle 160 as shown in Figs 18a and 18b, the apparatus 100 can be used to check the profile of the ground in front of the vehicle 160. Thus, the profile of the ground could be shown in profile and plan views as illustrated in Figs 19a and 19b respectively. Alternatively, or additionally, the software on the computer 120 could be used to generate a head-up video display to which the driver of the vehicle 160 could refer. Fig. 20 illustrates an example of the type of head-up display which could be generated. The heading of the tank (measured by the fluxgate compass) is displayed, with the range to and height of the ground (and any obstructions) in front of the vehicle also being displayed. The height displayed could be the height relative to the vehicle's position, or could be the absolute height obtained from the GPS 182.
Figs 21 to 23 illustrate three further applications of the apparatus 100. Fig. 21 illustrates how to calculate the height between two points A and B (indicated by crosses in Fig. 21) . The user will select the points A and B and then measure the range to them using the method described above. This will give three-dimensional coordinates for each point A and B. If it is assumed that the range to each point is approximately equal (which can be checked using the measured ranges) and that the x co-ordinates for each point are approximately equal (this can be done using the display of x, y and z co-ordinates displayed on the screen) , then the height from A to B is given by subtracting their respective y coordinates. This can then be displayed within a separate window within the screen, for example.
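A sketch of this comparison follows (Python). The conversion from range and angles to (x, y, z) coordinates uses an assumed axis convention, with y vertical and z along the instrument's reference direction; the patent only states that three-dimensional coordinates are obtained for each point:

```python
import math

def point_coordinates(r, ha, va):
    """(x, y, z) of a ranged point from range r and pan/tilt angles
    ha, va in radians (y vertical -- an illustrative axis convention)."""
    y = r * math.sin(va)        # vertical component
    ground = r * math.cos(va)   # projection onto the horizontal plane
    x = ground * math.sin(ha)
    z = ground * math.cos(ha)
    return x, y, z

def height_a_to_b(a, b):
    """Height from A to B: the difference of their y coordinates."""
    return b[1] - a[1]
```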
Fig. 22 illustrates the technique used to measure the height and distance between two points A and B. The ranges to A and B are first measured using the apparatus 100 as described above. The slope from A to B, the horizontal difference between A and B and the gradient from A to B are then calculated, the results being overlayed on the screen.
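Given (x, y, z) coordinates for A and B with y taken as the vertical axis (an illustrative convention, not fixed by the patent), the quantities of Fig. 22 reduce to a few lines:

```python
import math

def slope_horizontal_gradient(a, b):
    """Slope distance, horizontal distance and gradient from A to B,
    for (x, y, z) points with y taken as the vertical axis."""
    dx, dy, dz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    horizontal = math.hypot(dx, dz)                  # plan-view separation
    slope = math.sqrt(dx * dx + dy * dy + dz * dz)   # straight-line distance
    gradient = dy / horizontal                       # rise over run
    return slope, horizontal, gradient
```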
Fig. 23 illustrates how a rock face or the like may be profiled. Range measurements are taken at intervals along the profile (indicated by crosses in Fig. 23). The height of each measurement can be calculated either from the inclinometer reading or using the GPS 182. Thus, a rock profile may be produced, as shown in Fig. 23.
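Where the inclinometer reading is used, the height of each profile point above the instrument follows from basic trigonometry (the formula is not spelled out in the text; the angle is taken in degrees here):

```python
import math

def point_height(r, incline_deg):
    """Height of a ranged point above the instrument, from the measured
    range and the inclinometer angle in degrees."""
    return r * math.sin(math.radians(incline_deg))
```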
While the above describes typical applications for which the survey apparatus of the present invention may be used, the full range of applications of the survey apparatus disclosed herein will be apparent to those skilled in the art, and the present invention is not limited to the examples discussed.
Thus, there is provided a survey apparatus and method which provides for remote control operation using a video camera to relay images back to a host computer in real-time. The image on the host computer allows the user to select particular objects of interest within the surveyed area and measure the range to these objects. The apparatus can also be used to determine rock profiles, heights between two points, the position of certain objects and the like.
Modifications and improvements may be made to the foregoing without departing from the scope of the present invention.

Claims

1. A survey apparatus comprising a range finder, a camera and a processor capable of processing image and range signals, wherein the camera facilitates aiming of the range finder.
2. A survey apparatus according to claim 1, wherein the camera comprises a video camera.
3. A survey apparatus according to either preceding claim, wherein the camera comprises a digital camera.
4. A survey apparatus according to any preceding claim, wherein the apparatus includes a display device to allow a user of the apparatus to view a target area using the camera.
5. A survey apparatus according to claim 4, wherein the display device comprises a VGA monitor.
6. A survey apparatus according to any preceding claim, wherein the processor comprises a computer.
7. A survey apparatus according to any preceding claim, wherein the range finder comprises a laser range finder.
8. A survey apparatus according to any preceding claim, wherein the range finder is bore-sighted with the camera.
9. A survey apparatus according to any preceding claim, wherein the apparatus includes a pan and tilt unit for panning and tilting of the range finder and/or camera.
10. A survey apparatus according to claim 9, wherein the pan and tilt unit comprises a first motor for panning of the range finder and/or camera, and a second motor for tilting of the range finder and/or camera.
11. A survey apparatus according to either claim 9 or claim 10, wherein operation of the first and second motors is controlled by the processor.
12. A survey apparatus according to any one of claims 9 to 11, wherein the pan and tilt unit includes first and second digital encoders for measuring the angles of pan and tilt.
13. A survey apparatus according to claim 12, wherein the outputs of the first and second encoders are fed to the processor.
14. A survey apparatus according to claim 13, wherein a feedback loop is provided wherein the motors are capable of being operated to pan and tilt the range finder and/or camera through the generated horizontal and vertical angles, and the encoders are capable of verifying the angles moved to verify that the range finder and/or camera were panned and tilted through the correct angles.
15. A survey apparatus according to any one of claims 12 to 14, wherein the first and second encoders are used to calculate the bearing to the target.
16. A survey apparatus according to any preceding claim, wherein the image is digitised.
17. A survey apparatus according to claim 16, wherein the image comprises a plurality of pixels.
18. A survey apparatus according to claim 17, wherein the reference point comprises a pixel within the target area.
19. A survey apparatus according to any preceding claim, wherein the reference point comprises a centre point of the target area.
20. A survey apparatus according to any one of claims 16 to 19, wherein the target is selected by selecting a pixel within the target.
21. A survey apparatus according to any preceding claim, wherein the survey apparatus includes a compass and an inclinometer and/or gyroscope.
22. A survey apparatus according to claim 21, wherein the compass comprises a digital fluxgate compass.
23. A survey apparatus according to either claim 21 or claim 22, wherein signals from the compass, inclinometer and/or gyroscope are processed to provide data to the processor.
24. A survey apparatus according to any preceding claim, wherein the survey apparatus further includes a position fixing system for identifying the geographical position of the apparatus.
25. A survey apparatus according to claim 24, wherein the position fixing system comprises a Global Positioning System.
26. A survey apparatus according to claim 25, wherein the Global Positioning System includes a Differential Global Positioning System.
27. A survey apparatus according to any one of claims 24 to 26, wherein the signal from the position fixing system is processed to provide data to the processor.
28. A survey apparatus according to any preceding claim, wherein the survey apparatus is mounted on a mounting device.
29. A survey apparatus according to claim 28, wherein the mounting device comprises a tripod stand.
30. A survey apparatus according to any preceding claim, wherein the apparatus can be mounted on an elevating platform, telescopic elevating tube, telescopic arm or robotic arm.
31. A survey apparatus according to claim 30, wherein the elevating platform, telescopic elevating tube, telescopic arm or robotic arm is capable of 360° rotation.
32. A survey apparatus according to either claim 29 or claim 30, wherein the elevating platform, telescopic elevating tube, telescopic arm or robotic arm is mounted on a vehicle.
33. A survey apparatus according to claim 32, wherein the apparatus allows data gathering from within the vehicle to construct a digital terrain model of the terrain surrounding the vehicle.
34. A method of measuring the range to a target, the method comprising the steps of providing a camera to view a target area; providing a range finder; using the camera to produce an image of the target area; selecting the target within the target area; generating horizontal and vertical angles between a reference point and the target; and moving the range finder and/or camera, if required, through the generated horizontal and vertical angles to measure the range to the target.
35. A method according to claim 34, wherein the camera comprises a video camera.
36. A method according to either claim 34 or claim 35, wherein the camera comprises a digital camera.
37. A method according to any one of claims 34 to 36, wherein the apparatus includes a display device to allow a user of the apparatus to view a target area using the camera.
38. A method according to claim 37, wherein the display device comprises a VGA monitor.
39. A method according to any one of claims 34 to 38, wherein the processor comprises a computer.
40. A method according to any one of claims 34 to 39, wherein the range finder comprises a laser range finder.
41. A method according to any one of claims 34 to 40, wherein the range finder is bore-sighted with the camera.
42. A method according to any one of claims 34 to 41, wherein the image is digitised.
43. A method according to claim 42, wherein the image comprises a plurality of pixels.
44. A method according to claim 43, wherein the reference point comprises a pixel within the target area.
45. A method according to any one of claims 34 to 43, wherein the reference point comprises a centre point of the target area.
46. A method according to any one of claims 42 to 45, wherein the target is selected by selecting a pixel within the target.
47. A method according to claim 46, wherein the target pixel is selected using a mouse pointer.
48. A method according to any one of claims 34 to 47, wherein the method comprises the further steps of obtaining a focal length of the camera; obtaining a field of view of the camera; calculating the principal distance of the camera; obtaining the horizontal offset and vertical offset between an axis of the camera and an axis of the laser; calculating the horizontal and vertical offsets in terms of pixels; calculating the difference between the horizontal and vertical offsets in terms of pixels and the x and y coordinates of the target pixel; and calculating the horizontal and vertical angles.
49. A method according to any one of claims 34 to 48, wherein the apparatus includes a pan and tilt unit for panning and tilting of the range finder and/or camera.
50. A method according to claim 49, wherein the pan and tilt unit comprises a first motor for panning of the range finder and/or camera, and a second motor for tilting of the range finder and/or camera.
51. A method according to either claim 49 or claim 50, wherein operation of the first and second motors is controlled by the processor.
52. A method according to any one of claims 49 to 51, wherein the pan and tilt unit includes first and second digital encoders for measuring the angles of pan and tilt.
53. A method according to claim 52, wherein the outputs of the first and second encoders are fed to the processor.
54. A method according to claim 53, wherein a feedback loop is provided wherein the motors are operated to pan and tilt the range finder and/or camera through the generated horizontal and vertical angles, and the encoders are used to check the angles to ensure that the range finder and/or camera were panned and tilted through the correct angles.
55. A method according to any one of claims 48 to 54, the method comprising the further steps of instructing the pan and tilt unit to pan and tilt the range finder and/or camera through the horizontal and vertical angles; measuring the horizontal and vertical angles using the encoders; verifying that the angles through which the range finder and/or camera were moved are correct; obtaining horizontal and/or vertical correction angles by subtracting the measured horizontal and vertical angles from the calculated horizontal and vertical angles; adjusting the pan and tilt of the range finder and/or camera if necessary; and firing the range finder to obtain the range to the target.
PCT/GB1999/001361 1998-05-15 1999-05-17 Survey apparatus WO1999060335A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU39374/99A AU3937499A (en) 1998-05-15 1999-05-17 Survey apparatus
EP99922262A EP1078221A1 (en) 1998-05-15 1999-05-17 Survey apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9810405.2 1998-05-15
GBGB9810405.2A GB9810405D0 (en) 1998-05-15 1998-05-15 Survey apparatus

Publications (1)

Publication Number Publication Date
WO1999060335A1 true WO1999060335A1 (en) 1999-11-25

Family

ID=10832081

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1999/001361 WO1999060335A1 (en) 1998-05-15 1999-05-17 Survey apparatus

Country Status (4)

Country Link
EP (1) EP1078221A1 (en)
AU (1) AU3937499A (en)
GB (1) GB9810405D0 (en)
WO (1) WO1999060335A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5077557A (en) * 1988-07-06 1991-12-31 Wild Leitz Ag Surveying instrument with receiver for satellite position-measuring system and method of operation
EP0481278A1 (en) * 1990-10-15 1992-04-22 IBP Pietzsch GmbH Method and measuring device for locating points in space
US5379045A (en) * 1993-09-01 1995-01-03 Trimble Navigation Limited SATPS mapping with angle orientation calibrator
EP0661519A1 (en) * 1993-12-28 1995-07-05 Kabushiki Kaisha Topcon Surveying instrument

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7339611B2 (en) 2000-05-20 2008-03-04 Trimble Jena Gmbh Method and arrangement for carrying out an information flow and data flow for geodetic instruments
DE10066379B4 (en) * 2000-05-20 2008-07-10 Trimble Jena Gmbh Method and device for realizing an information and data flow for geodetic devices
EP1219925A2 (en) * 2000-12-28 2002-07-03 Kabushiki Kaisha Topcon Surveying apparatus
EP1219925A3 (en) * 2000-12-28 2004-06-02 Kabushiki Kaisha Topcon Surveying apparatus
EP1273882A2 (en) * 2001-07-03 2003-01-08 Prüftechnik Dieter Busch Ag Device and method for measuring premises and machines
EP1273882A3 (en) * 2001-07-03 2008-09-17 Prüftechnik Dieter Busch Ag Device and method for measuring premises and machines
EP1314960A1 (en) * 2001-11-22 2003-05-28 Leica Geosystems AG Electronic display and control device for a measuring instrument
EP1314959A1 (en) * 2001-11-22 2003-05-28 Leica Geosystems AG Electronic display and control device for a measuring instrument
US7647197B2 (en) 2002-08-09 2010-01-12 Surveylab Group Limited Mobile instrument, viewing device, and methods of processing and storing information
US8024151B2 (en) 2002-08-09 2011-09-20 Surveylab Group Ltd. Mobile instrument, viewing device, and methods of processing and storing information
WO2004036145A1 (en) * 2002-10-12 2004-04-29 Leica Geosystems Ag Electronic display and control device for a measuring device
AU2003229660C1 (en) * 2002-10-12 2009-02-19 Leica Geosystems Ag Electronic display and control device for a measuring device
AU2003229660B2 (en) * 2002-10-12 2008-09-18 Leica Geosystems Ag Electronic display and control device for a measuring device
US7342650B2 (en) 2002-10-12 2008-03-11 Leica Geosystems Ag Electronic display and control device for a measuring device
US7633610B2 (en) 2003-03-21 2009-12-15 Leica Geosystems Ag Method and device for image processing in a geodetic measuring instrument
US7355724B2 (en) 2004-04-20 2008-04-08 Airbus France Three-dimensional measurement system
EP1589354A3 (en) * 2004-04-20 2007-06-27 Airbus France System for three-dimensional measurements
EP1589354A2 (en) 2004-04-20 2005-10-26 Airbus France System for three-dimensional measurements
FR2869112A1 (en) * 2004-04-20 2005-10-21 Airbus France Sas THREE DIMENSION MEASURING SYSTEM
US7301617B2 (en) 2004-06-09 2007-11-27 Kabushiki Kaisha Topcon Surveying apparatus
EP1605231A1 (en) * 2004-06-09 2005-12-14 Kabushiki Kaisha TOPCON Surveying apparatus
US8792680B2 (en) 2005-06-23 2014-07-29 Israel Aerospace Industries Ltd. System and method for tracking moving objects
EP2479990A3 (en) * 2005-06-23 2013-01-23 Israel Aerospace Industries Ltd. A system and method for tracking moving objects
WO2008089789A1 (en) * 2007-01-25 2008-07-31 Trimble Ab Aiming of a geodetic instrument
US7930835B2 (en) 2007-01-25 2011-04-26 Trimble Ab Aiming of a geodetic instrument
EP2112470A1 (en) * 2007-02-12 2009-10-28 Qifeng Yu A photogrammetric method using folding optic path transfer for an invisible target of three-dimensional position and posture
EP2112470A4 (en) * 2007-02-12 2014-05-21 Qifeng Yu A photogrammetric method using folding optic path transfer for an invisible target of three-dimensional position and posture
US7594441B2 (en) 2007-09-27 2009-09-29 Caterpillar Inc. Automated lost load response system
US9464408B2 (en) 2007-10-26 2016-10-11 Deere & Company Three dimensional feature location and characterization from an excavator
DE102010011528A1 (en) * 2010-03-15 2011-09-15 Ulrich Clauss Recording arrangement for extracting geometric and photometric object data of e.g. objects at an accident site, has tilt unit rotatably coordinated at an angle and connected with a line camera, so that the tilt unit is moved about rotational axes
WO2013086635A1 (en) * 2011-12-15 2013-06-20 Atkinson Darren Glen Locating and relocating device
CN104169735A (en) * 2011-12-15 2014-11-26 阿特金森音频有限公司 Locating and relocating device
US8991062B2 (en) 2011-12-15 2015-03-31 Atkinson Audio Inc. Locating and relocating device
US9322641B2 (en) 2011-12-15 2016-04-26 Atkinson Audio Inc. Locating and relocating device
US10192139B2 (en) 2012-05-08 2019-01-29 Israel Aerospace Industries Ltd. Remote tracking of objects
CN102878978B (en) * 2012-08-31 2014-12-24 深圳华盛昌机械实业有限公司 Method for generating project blueprint by remote control distance measurement
CN102878978A (en) * 2012-08-31 2013-01-16 深圳华盛昌机械实业有限公司 Method and device for generating project blueprint by remote control distance measurement
US10212396B2 (en) 2013-01-15 2019-02-19 Israel Aerospace Industries Ltd Remote tracking of objects
US10551474B2 (en) 2013-01-17 2020-02-04 Israel Aerospace Industries Ltd. Delay compensation while controlling a remote sensor
WO2016120044A1 (en) * 2015-01-27 2016-08-04 Bayerische Motoren Werke Aktiengesellschaft Measurement of a dimension on a surface
CN107003409A (en) * 2015-01-27 2017-08-01 宝马股份公司 The measurement of size on the surface
US10611307B2 (en) 2015-01-27 2020-04-07 Bayerische Motoren Werke Aktiengesellschaft Measurement of a dimension on a surface
WO2017026956A3 (en) * 2015-08-13 2017-04-20 Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi An artillery surveying device

Also Published As

Publication number Publication date
EP1078221A1 (en) 2001-02-28
GB9810405D0 (en) 1998-07-15
AU3937499A (en) 1999-12-06

Similar Documents

Publication Publication Date Title
US7184088B1 (en) Apparatus and method for obtaining 3D images
EP1078221A1 (en) Survey apparatus
Petrie et al. Terrestrial laser scanners
US20050057745A1 (en) Measurement methods and apparatus
US9322652B2 (en) Stereo photogrammetry from a single station using a surveying instrument with an eccentric camera
CA2502012C (en) Electronic display and control device for a measuring device
US9958268B2 (en) Three-dimensional measuring method and surveying system
US9933512B2 (en) Method and geodetic device for surveying at least one target
US7742176B2 (en) Method and system for determining the spatial position of a hand-held measuring appliance
US6031606A (en) Process and device for rapid detection of the position of a target marking
CN110737007B (en) Portable positioning device and method for obtaining geospatial position
KR101703774B1 (en) Calibration method for a device having a scan function
CN109416399A (en) 3-D imaging system
Mills et al. Geomatics techniques for structural surveying
Puente et al. Land-based mobile laser scanning systems: a review
EP3514489B1 (en) Surveying device and surveying method
CN113167581A (en) Measuring method, measuring system and auxiliary measuring instrument
JP7378545B2 (en) Target equipment and surveying method
Cramer et al. Ultra-high precision UAV-based LIDAR and dense image matching
EP3353492B1 (en) Device and method to locate a measurement point with an image capture device
US20220099442A1 (en) Surveying System
JP7161298B2 (en) target device, surveying system
WO2024071287A1 (en) Twist ring polygon mirror, light transmitter, and surveying system
JP2024050347A (en) Twist-ring polygon mirror, light transmitter, and surveying system
JP2023048409A (en) Survey system

Legal Events

Date Code Title Description
AK Designated states
Kind code of ref document: A1
Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZA ZW
AL Designated countries for regional patents
Kind code of ref document: A1
Designated state(s): GH GM KE LS MW SD SL SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase
Ref country code: KR
WWE Wipo information: entry into national phase
Ref document number: 1999922262
Country of ref document: EP
WWE Wipo information: entry into national phase
Ref document number: 09700603
Country of ref document: US
WWP Wipo information: published in national office
Ref document number: 1999922262
Country of ref document: EP
REG Reference to national code
Ref country code: DE
Ref legal event code: 8642
WWW Wipo information: withdrawn in national office
Ref document number: 1999922262
Country of ref document: EP