This application is a reissue of U.S. patent application Ser. No. 15/448,962, now U.S. Pat. No. 9,962,927.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present disclosure relates to a position detection apparatus, a droplet discharging apparatus, a method for detecting a position, and a medium.
2. Description of the Related Art
Printers that convey a sheet and discharge ink when the sheet reaches an image forming position to form an image have been known. For printers, needs for smaller sizes and portability have been increasing as downsized notebook PCs and smart devices have become popular. As such, printers downsized by omitting the sheet conveyance system (referred to as "handy mobile printers (HMPs)", below) are coming into practical use. An HMP does not have a sheet conveyance system installed; instead, the HMP is moved by a person to scan the surface of a sheet and to discharge ink.
The HMP detects its own position on the surface of the sheet, and discharges the ink to form an image depending on the position. As a mechanism for detecting the position, a conventional HMP has been known that has two navigation sensors mounted on the bottom face (see, for example, Patent document 1). The navigation sensor is a sensor that optically detects fine edges on the surface of a sheet, to detect the amount of movement every cycle time. Having the two navigation sensors mounted makes it possible for the HMP to detect the angle of rotation in the direction horizontal to the surface of the sheet.
However, such a conventional HMP has a problem: it is difficult to reduce the size of its bottom part. First, with only one navigation sensor mounted, the HMP can detect a position; however, two sensors are required for calculating the angle of rotation of the HMP with respect to the surface of the sheet, and for calculating the position based on that angle of rotation. Moreover, to improve the precision of the detected angle of rotation, it is preferable to have a certain interval between the two navigation sensors. For these reasons, it is difficult to reduce the size of the bottom part of a conventional HMP.
SUMMARY OF THE INVENTION
According to an embodiment, a position detection apparatus configured to detect a position on a movement surface of a mounted object having the position detection apparatus mounted thereon, includes a moved amount detector configured to detect an amount of movement on the movement surface; a posture detector configured to detect at least a posture of the mounted object on the movement surface; and a position calculator configured to calculate the position of the mounted object, based on the amount of movement and the posture.
BRIEF DESCRIPTION OF DRAWINGS
FIGS. 1A-1D are examples of diagrams illustrating an overview of a configuration of an HMP according to an embodiment;
FIGS. 2A-2B are examples of diagrams schematically illustrating image forming by an HMP;
FIG. 3 is an example of a hardware configuration diagram of an HMP;
FIG. 4 is an example of a diagram illustrating a configuration of a controller;
FIG. 5 is an example of a diagram illustrating principles of detecting angular velocity by a gyro sensor;
FIG. 6 is a diagram illustrating an example of a hardware configuration of a navigation sensor;
FIGS. 7A-7B are examples of diagrams illustrating a method for detecting an amount of movement by a navigation sensor;
FIG. 8 is an example of a configuration diagram of an IJ recording head drive circuit;
FIGS. 9A-9B are examples of plan views of an HMP;
FIGS. 10A-10B are examples of diagrams illustrating a coordinate system of an HMP and a method for calculating a position;
FIG. 11 is an example of a diagram illustrating a method for calculating an angle of rotation dθ of an HMP generated during image forming;
FIG. 12 is an example of a diagram illustrating a relationship between targeting discharge positions and nozzle positions;
FIG. 13 is an example of a flowchart illustrating operational steps of an image data output device and an HMP;
FIGS. 14A-14F are examples of comparative diagrams illustrating image formable areas in case of two navigation sensors;
FIGS. 15A-15F are examples of diagrams illustrating image formable areas in case of one navigation sensor;
FIG. 16 is an example of a diagram illustrating an arrangement of a navigation sensor;
FIGS. 17A-17C are examples of diagrams illustrating arrangements of a gyro sensor;
FIG. 18 is an example of a diagram illustrating a posture of an HMP detected by a gyro sensor;
FIG. 19 is an example of a diagram illustrating change of the distance between a sensor and a sheet, and the resolution of the amount of movement;
FIGS. 20A-20B are examples of diagrams illustrating an attached position of a navigation sensor;
FIGS. 21A-21C are examples of diagrams illustrating the amount of floating of a navigation sensor over a print medium; and
FIG. 22 is an example of a flowchart illustrating operational steps of an image data output device and an HMP (second application example).
DETAILED DESCRIPTION OF THE EMBODIMENTS
In the following, embodiments will be described with reference to the drawings.
According to an embodiment, it is possible to provide a position detection apparatus whose size of the bottom part can be reduced.
First Application Example
First, general features of a handy mobile printer (referred to as an "HMP", below) will be described according to the embodiment using FIGS. 1A-1D. FIGS. 1A-1D are examples of diagrams illustrating an overview of a configuration of the HMP according to the embodiment. FIG. 1A illustrates a configuration diagram of a conventional HMP 20 illustrated for comparison. The conventional HMP 20 includes an IJ recording head 24 and two navigation sensors 30 (referred to as the "navigation sensors S0-S1" when the distinction is required, below).
FIG. 1B illustrates an image formable area 501 of the conventional HMP 20. The HMP 20 in FIG. 1A has an IJ recording head 24 on the left side, and the two navigation sensors S0-S1 arranged vertically on the right side. The interval between the navigation sensor S1 and the lower end of the nozzle 61 is A mm, and the interval between the nozzle 61 and the navigation sensors S0-S1 is B mm. To prevent the navigation sensors S0-S1 from going out of a print medium 12, the HMP 20 cannot be moved to the area of the print medium 12 within B mm from the right end. Also, since the interval is A mm between the navigation sensor S1 and the lower end of the nozzle 61, the HMP 20 cannot be moved to the area of the print medium 12 within A mm from the lower end. Thus, areas where printing cannot be executed are generated on the lower part and the side part of the print medium 12.
FIG. 1C illustrates a configuration diagram of an HMP 20 in the embodiment. The HMP 20 in the embodiment includes an IJ recording head 24, a single navigation sensor S0, and a gyro sensor 31. FIG. 1D illustrates an image formable area 501 of the HMP 20 in the embodiment. It is assumed that the interval between the navigation sensor S0 and the lower end of the nozzle 61 is A mm. To prevent the navigation sensor S0 from going out of the print medium 12, the HMP 20 cannot be moved to the area of the print medium 12 within A mm from the lower end. However, since the interval in the lateral direction between the nozzle 61 and the navigation sensor S0 is zero, the HMP 20 can be moved from the left end to the right end of the print medium 12. Thus, the non-printable area is generated only on the lower part of the print medium 12 as illustrated in FIG. 1D.
As can be clearly seen by comparing FIG. 1B with FIG. 1D, having the gyro sensor 31 mounted on the HMP 20 makes it possible to reduce the number of navigation sensors to one, and hence, to reduce the size of the bottom face. Consequently, the image formable area 501 can be extended.
<About Terms>
The "size of the bottom face" is the size of an area that surrounds the navigation sensor 30 and the nozzle 61, or the size of the bottom face of the HMP 20 that cannot be made smaller anymore due to the restriction of the surrounding area. An actual size of the bottom face of the HMP 20 may be larger than the area surrounding the navigation sensor 30 and the nozzle 61, and may be determined taking operability, design, and the like into consideration.
A "mounted object" refers to an object having a position detection apparatus mounted. A "mounted object" may be any object whose position can be detected on a movement surface. For example, the HMP 20 is an example of a mounted object. Also, since the position detection apparatus can detect a moved distance, a distance measuring device may be an example of a mounted object.
The "movement surface" just needs to be a surface on which the HMP 20 can move, which includes a plane and a curved surface. Specifically, the print medium 12 is a movement surface, but it is not limited as such.
Moreover, a "posture of an object" means the degrees of freedom representing the angles of rotation among the six degrees of freedom of the object (a rigid body), namely, the angles of rotation around three axes that pass through the center of gravity of the rigid body and are perpendicular to each other. Among these, the posture of the object in a plane is represented by the angle of rotation around an axis perpendicular to the plane.
Also, “calculating a position” means obtaining information about a position by executing calculation on certain data, and “detecting a position” means obtaining information about a position regardless of the process. However, both are the same in terms of obtaining information about a position, and will not be strictly distinguished in the embodiments.
<Image Formation by HMP 20>
FIGS. 2A-2B are examples of diagrams schematically illustrating image forming by the HMP 20. Image data is transmitted to the HMP 20, for example, from an image data output device 11 such as a smartphone or a PC (Personal Computer). The user grips the HMP 20 freehand and moves the HMP 20 to scan the surface of the print medium 12 (for example, a standard size sheet or a notebook), while keeping the HMP 20 from floating over the print medium 12.
As will be described in detail later, the HMP 20 detects a position by the navigation sensor S0 and the gyro sensor 31, and when moved to a targeting discharge position, discharges ink of a color to be discharged onto the targeting discharge position. Since a place on which the ink has already been discharged is masked (so as not to be a targeting discharge position anymore), the user can move the HMP 20 to scan the print medium 12 in any direction to form the image.
It is preferable to maintain the HMP 20 so as not to float over the print medium 12, because the navigation sensor S0 detects the amount of movement by using light reflected from the print medium 12. If the HMP 20 floats over the print medium 12, the reflected light cannot be detected, and the amount of movement cannot be detected. Also, if the navigation sensor S0 goes out of the print medium 12, it may not be able to detect the reflected light due to the thickness of the print medium 12, or even if the reflected light is detected, the position may be shifted. Therefore, it is preferable to keep the navigation sensor S0 on the print medium 12 while scanning, and to have the nozzle 61 and the navigation sensor S0 located together on the print medium 12 as described above.
<Example of Configuration>
FIG. 3 is an example of a hardware configuration diagram of the HMP 20. The HMP 20 is an example of a droplet discharging apparatus or an image forming apparatus that forms an image on a print medium 12. The overall operation of the HMP 20 is controlled by a controller 25 to which a communication I/F (Interface) 27, an IJ (Inkjet) recording head drive circuit 23, an OPU (Operation Panel Unit) 26, a ROM (Read-Only Memory) 28, a DRAM (Dynamic Random Access Memory) 29, the navigation sensor 30, and the gyro sensor 31 are electrically connected. The HMP 20 further includes a power source 22 and a power source circuit 21 to be driven by electric power. The electric power generated by the power source circuit 21 is supplied to the communication I/F 27, the IJ recording head drive circuit 23, the OPU 26, the ROM 28, the DRAM 29, the IJ recording head 24, the controller 25, the navigation sensor 30, and the gyro sensor 31 by wiring designated by dotted lines 22a.
A battery is mainly used as the power source 22. Alternatively, a solar cell, a commercial power source (an AC power supply), a fuel cell, or the like may be used. The power source circuit 21 distributes the electric power supplied by the power source 22 to the parts of the HMP 20. The power source circuit 21 also boosts or steps down the voltage of the power source 22 to be suitable for the respective parts of the HMP 20. In addition, if the power source 22 is a rechargeable battery, the power source circuit 21 detects a connection to an AC power supply and connects the AC power supply with a charge circuit of the battery to charge the power source 22.
The communication I/F 27 receives image data from the image data output device 11 such as a smartphone or a PC (Personal Computer). The communication I/F 27 is, for example, a communication device corresponding to communication standards such as a wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication), infrared communication, 3G (cellular phone), and LTE (Long Term Evolution). Other than such wireless communication standards, the communication I/F 27 may be a communication device corresponding to cable communication using a wired LAN, a USB cable, and the like.
The ROM 28 stores firmware to control the hardware of the HMP 20, drive waveform data of the IJ recording head 24 (data specifying voltage change to discharge droplets), initial setting data of the HMP 20, and the like.
The DRAM 29 is used for storing image data received by the communication I/F 27, and for storing the firmware loaded from the ROM 28. Therefore, the DRAM 29 is used as a work memory when the CPU 33 runs the firmware.
The navigation sensor 30 is a sensor to detect the amount of movement of the HMP 20 every predetermined cycle time. The navigation sensor 30 includes, for example, a light source such as a light-emitting diode (LED) or a laser, and an imaging sensor to capture an image of the print medium 12. When the HMP 20 scans over the print medium 12, fine edges on the print medium 12 are detected (imaged) one after another, and the amount of movement is obtained by analyzing the distance between the edges. In the embodiment, only one navigation sensor 30 is mounted on the bottom face of the HMP 20, whereas two sensors are mounted conventionally. However, an HMP 20 having two navigation sensors 30 may be described for the sake of comparison. Note that a multi-axis acceleration sensor may be used as the navigation sensor 30, and the HMP 20 may detect the amount of movement of the HMP 20 only by the acceleration sensor.
The gyro sensor 31 is a sensor to detect angular velocity when the HMP 20 is rotated around an axis perpendicular to the print medium 12. This will be described in detail later.
The OPU 26 includes an LED to display a state of the HMP 20, and a switch for the user to make the HMP 20 start image forming. However, the elements are not limited to these; a liquid crystal display may be included, and a touch panel may be further included. Also, a voice input function may be provided.
The IJ recording head drive circuit 23 generates a drive waveform (voltage) to drive the IJ recording head 24 by using the drive waveform data described above. A drive waveform depending on the size of ink droplets and the like can be generated.
The IJ recording head 24 is a head for discharging ink. The IJ recording head 24 illustrated in the figure is capable of discharging four colors (CMYK) of ink, but the ink may be monochrome, or five or more colors may be discharged. One row (or more rows) of nozzles 61 (discharging parts) may be arranged for each color for discharging the ink. Also, ink discharging may be implemented by a piezoelectric system, a thermal system, or another system. The IJ recording head 24 is a functional part that discharges or jets out liquid from the nozzles 61. Liquid to be discharged just needs to have an appropriate viscosity and surface tension so as to be discharged from the IJ recording head 24.
The liquid is not specifically limited, but preferably has a viscosity less than or equal to 30 mPa·s at normal temperature and normal pressure, or when heated or cooled. More specifically, available liquid may be a solution, suspension, emulsion, or the like that includes: solvents such as water and organic solvents; colorants such as dyes and pigments; functional materials such as polymerizable compounds, resins, and surfactants; biocompatible materials such as DNA, amino acids, proteins, and calcium; and edible materials such as natural colorants. Such liquid may be used as, for example, ink for the inkjet; surface treatment liquid; liquid for forming elements such as electronic devices and light-emitting devices, and resist patterns of an electronic circuit; material liquid for three-dimensional molding; and the like.
The controller 25 includes the CPU 33 to control the HMP 20 as a whole. Based on the amount of movement detected by the navigation sensor 30 and the angular velocity detected by the gyro sensor 31, the controller 25 determines the positions of the nozzles 61 of the IJ recording head 24, an image to be formed depending on the positions, and whether to have each nozzle 61 discharge the ink at the position. The controller 25 will be described in detail next.
FIG. 4 is an example of a diagram illustrating a configuration of the controller 25. The controller 25 includes an SoC 50 and an ASIC/FPGA 40. The ASIC/FPGA 40 and the SoC 50 communicate with each other via buses 46 and 47. The "ASIC/FPGA" 40 means a device implemented in one of the packaging technologies of ASIC, FPGA, or any other packaging technology. Also, the SoC 50 and the ASIC/FPGA 40 may be implemented on a single chip or board, not divided into separate chips. The number of chips or boards may be three or more.
The SoC 50 includes functions of a CPU 33, a position calculation circuit 34, a memory controller (CTL) 35, and a ROM controller 36, which are connected with each other via the bus 47. Note that elements included in the SoC 50 are not limited to these.
Also, the ASIC/FPGA 40 includes an Image RAM 37, a DMAC 38, a rotator 39, an interrupt controller 41, a navigation sensor I/F 42, a print/sense timing generator 43, an IJ recording head controller 44, and a gyro sensor I/F 45, which are connected with each other via the bus 46. Note that elements included in the ASIC/FPGA 40 are not limited to these.
The CPU 33 runs firmware (a program) loaded into the DRAM 29 from the ROM 28, to control operations of the position calculation circuit 34, the memory controller 35, and the ROM controller 36 in the SoC 50. The CPU 33 also controls operations of the Image RAM 37, the DMAC 38, the rotator 39, the interrupt controller 41, the navigation sensor I/F 42, the print/sense timing generator 43, the IJ recording head controller 44, the gyro sensor I/F 45, and the like in the ASIC/FPGA 40.
The position calculation circuit 34 calculates the position (coordinate information) of the HMP 20, based on the amount of movement detected by the navigation sensor 30 every sampling cycle, and the angular velocity detected by the gyro sensor 31 every sampling cycle. The position of the HMP 20 is, strictly speaking, the positions of the nozzles 61, which can be calculated once the position of the navigation sensor 30 is determined. In the present application example, the position of the navigation sensor 30 is assumed to be the position of the navigation sensor S0 unless otherwise specified. The position calculation circuit 34 also calculates a targeting discharge position. Note that the CPU 33 may implement functions of the position calculation circuit 34 by software.
The position of the navigation sensor 30 is calculated, for example, based on a predetermined origin (the initial position of the HMP 20 when image forming is started) as the reference, as will be described later. The position calculation circuit 34 also estimates the direction of movement and acceleration based on a difference between a past position and a latest position, to predict, for example, the position of the navigation sensor 30 at the next discharge timing. This makes it possible to discharge ink while preventing a delay behind the user's scanning operation.
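The prediction described above amounts to extrapolating the sensor position from recent samples. The following is a minimal sketch under a constant-velocity assumption; the function name and arguments are illustrative only, not the circuit's actual implementation.

```python
def predict_position(p_prev, p_latest, dt_sample, dt_ahead):
    """Extrapolate the next sensor position from the last two samples.

    p_prev, p_latest: (x, y) positions at the previous and latest sampling times.
    dt_sample: sampling cycle; dt_ahead: time until the next discharge timing.
    """
    # Velocity estimated from the difference between the past and latest positions.
    vx = (p_latest[0] - p_prev[0]) / dt_sample
    vy = (p_latest[1] - p_prev[1]) / dt_sample
    # Predicted position of the navigation sensor at the next discharge timing.
    return (p_latest[0] + vx * dt_ahead, p_latest[1] + vy * dt_ahead)

# Moving +1 mm in X per 1 ms sample; predict 0.5 ms ahead of the latest sample.
print(predict_position((0.0, 0.0), (1.0, 0.0), 1.0, 0.5))  # → (1.5, 0.0)
```

A real implementation could also use the acceleration estimated from three or more samples, as the text suggests; the linear form is kept here for brevity.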
The memory controller 35 is an interface with the DRAM 29, to make a request for data to the DRAM 29, to transmit the obtained firmware to the CPU 33, and to transmit obtained image data to the ASIC/FPGA 40.
The ROM controller 36 is an interface with the ROM 28, to make a request for data to the ROM 28, and to transmit obtained data to the ASIC/FPGA 40.
The rotator 39 rotates the image data obtained by the DMAC 38 depending on the head that discharges ink, the nozzle positions in the head, the inclination of the head due to an installation error, and the like. The DMAC 38 outputs the image data after the rotation to the IJ recording head controller 44.
The Image RAM 37 temporarily stores the image data obtained by the DMAC 38. In other words, a certain amount of image data is buffered to be read out depending on the position of the HMP 20.
The IJ recording head controller 44 applies a dither process and the like to image data (bitmap data), to convert the image data into a collection of points that represent an image by their sizes and density. Thus, the image data becomes data represented by the discharge positions and the sizes of the points. The IJ recording head controller 44 outputs a control signal depending on the sizes of the points to the IJ recording head drive circuit 23. As described above, the IJ recording head drive circuit 23 generates a drive waveform (voltage) by using the drive waveform data corresponding to the control signal.
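A common form of such a dither process is ordered dithering against a threshold matrix. The sketch below is illustrative only; the 2x2 Bayer matrix and the binary (dot / no dot) output are assumptions, not the controller's actual algorithm, which may also vary the point sizes.

```python
# 2x2 Bayer threshold matrix, scaled to the 0-255 grayscale range.
BAYER2 = [[0, 128], [192, 64]]

def dither(gray):
    """Binarize a grayscale bitmap (0-255) by ordered dithering.

    Returns 1 where a dot (ink droplet) should be placed, 0 otherwise.
    """
    out = []
    for y, row in enumerate(gray):
        # Tile the threshold matrix over the image and compare per pixel.
        out.append([1 if v > BAYER2[y % 2][x % 2] else 0
                    for x, v in enumerate(row)])
    return out

# A mid-gray area becomes a checkerboard of dots, i.e. about 50% ink coverage.
print(dither([[100, 100], [100, 100]]))  # → [[1, 0], [0, 1]]
```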
The navigation sensor I/F 42 communicates with the navigation sensor 30 to receive the amounts of movement ΔX′ and ΔY′, which will be described later, as information from the navigation sensor 30, and to store the values in an internal register.
The print/sense timing generator 43 indicates, to the navigation sensor I/F 42 and the gyro sensor I/F 45, the timing at which to read information, and indicates drive timing to the IJ recording head controller 44. The cycle of the timing to read information is longer than the cycle of the timing to discharge ink. The IJ recording head controller 44 determines whether to have the nozzles 61 discharge the ink; if there is a targeting discharge position to which the ink is to be discharged, it discharges the ink, and if not, it does not discharge the ink.
The gyro sensor I/F 45 obtains the angular velocity detected by the gyro sensor 31 when the timing generated by the print/sense timing generator 43 has come, and stores the value in the register.
The interrupt controller 41 detects that the navigation sensor I/F 42 has completed communication with the navigation sensor 30, and outputs an interrupt signal to indicate the completion to the SoC 50. In response to this interrupt, the CPU 33 obtains ΔX′ and ΔY′ stored in the internal register by the navigation sensor I/F 42. The interrupt controller 41 also has a status indication function about errors and the like. Similarly, the interrupt controller 41 detects that the gyro sensor I/F 45 has completed communication with the gyro sensor 31, and outputs an interrupt signal to indicate the completion to the SoC 50.
<Gyro Sensor 31>
FIG. 5 is an example of a diagram illustrating principles of detecting angular velocity by the gyro sensor 31. When rotational movement acts on a moving object, a Coriolis force is generated in a direction perpendicular to both the moving direction and the axis of rotation of the object.
To move the object, the gyro sensor 31 generates velocity v (a vector) by vibrating a MEMS (Micro Electro Mechanical System) element. When angular velocity ω (a vector) of rotational movement from the outside acts on the vibrating MEMS element having the mass m, the MEMS element receives the Coriolis force. The Coriolis force F can be represented as follows.
F = −2mω × v
where "×" represents the outer product of vectors, and the Coriolis force F is directed in a direction perpendicular to both the moving direction and the axis of rotation of the body as described above. The MEMS element has, for example, an electrode having a comb-teeth-like structure, and the gyro sensor 31 senses displacement caused by the Coriolis force F as a change of the electrostatic capacity. The signal representing the Coriolis force F is amplified and filtered in the gyro sensor 31, and calculated as the angular velocity to be output. In other words, the angular velocity ω can be taken out based on F, m, and v, which are known.
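The relation F = −2mω × v can be checked numerically by evaluating the cross product directly. A minimal sketch; the numerical values are illustrative only.

```python
def coriolis_force(m, omega, v):
    """Compute F = -2 m (omega × v) for 3-D vectors omega and v."""
    ox, oy, oz = omega
    vx, vy, vz = v
    # Cross product omega × v.
    cx = oy * vz - oz * vy
    cy = oz * vx - ox * vz
    cz = ox * vy - oy * vx
    return (-2 * m * cx, -2 * m * cy, -2 * m * cz)

# Element of mass 1 kg vibrating along X at 1 m/s, rotated at 2 rad/s around Z:
# the resulting force lies along the Y-axis, perpendicular to both v and the
# axis of rotation, with magnitude 2*m*|omega|*|v| = 4 N.
print(coriolis_force(1.0, (0.0, 0.0, 2.0), (1.0, 0.0, 0.0)))
```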
<About Navigation Sensor 30>
FIG. 6 is a diagram illustrating an example of a hardware configuration of the navigation sensor 30. The navigation sensor 30 includes a host I/F 301, an image processor 302, an LED driver 303, two lenses 304 and 306, and an image array 305. The LED driver 303 is a unified device of a control circuit and an LED, and emits LED light in response to a command from the image processor 302. The image array 305 receives LED light reflected from the print medium 12 through the lens 304. The two lenses 304 and 306 are disposed so that the focal point comes optically onto the surface of the print medium 12.
The image array 305 includes photodiodes or the like having sensitivity at the wavelength of the LED light, to generate image data from the received LED light. The image processor 302 obtains the image data and calculates the moved distance (ΔX′ and ΔY′ described above) of the navigation sensor 30 from the image data. The image processor 302 outputs the calculated moved distance to the controller 25 via the host I/F 301.
The light-emitting diode (LED) used as the light source is useful for a print medium 12 having a coarse face such as paper. This is because a coarse face generates shadows, and by using the shadows as the characteristic part, it is possible to calculate the moved distance in the X-axis direction and the Y-axis direction precisely. On the other hand, for a print medium 12 whose surface is smooth or transparent, a semiconductor laser (LD) generating a laser beam may be used as the light source. This is because the semiconductor laser can form, for example, a striped pattern on the print medium 12 that can be used as the characteristic part, and the moved distance can be calculated precisely based on the characteristic part.
Next, operations of the navigation sensor 30 will be described using FIGS. 7A-7B. FIGS. 7A-7B are examples of diagrams illustrating a method for detecting the amount of movement by the navigation sensor 30. The light emitted by the LED driver 303 irradiates the surface of the print medium 12 through the lens 306. The surface of the print medium 12 has fine concavities and convexities of various shapes as illustrated in FIG. 7A. Therefore, the shadows are generated in various shapes.
The image processor 302 receives reflected light through the lens 304 and the image array 305 at every predetermined sampling timing, to obtain the image data 310. The image processor 302 generates a matrix from the image data 310 at a predetermined resolution as illustrated in FIG. 7B. In other words, the image processor 302 divides the image data 310 into multiple rectangular areas. Then, the image processor 302 compares the image data 310 obtained at the current sampling timing with the image data 310 obtained at the previous sampling timing, to detect the number of rectangular areas that have been passed through, and to calculate the moved distance. Assume that the HMP 20 has moved in the direction designated by ΔX in FIG. 7B. Comparing the image data 310 at t=0 with the data at t=1, a shape on the right side at t=0 matches a shape at the center at t=1. Thus, it can be understood that the shape has moved in the X-direction negatively, which means the HMP 20 has moved in the X-direction positively by one square. This is the same for the image data 310 at times t=1 and t=2.
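The comparison described above amounts to finding the shift that best aligns two sampled frames of rectangular areas. The following is a minimal block-matching sketch; the search range, scoring, and tie-breaking are assumptions for illustration, not the sensor's actual firmware.

```python
def estimate_shift(prev, curr, max_shift=2):
    """Find the (dx, dy) shift of the image content between two frames.

    prev, curr: 2-D lists of equal size (the rectangular-area matrices).
    Returns the shift of the image content; the sensor itself moved by the
    negative of this shift.
    """
    h, w = len(prev), len(prev[0])
    best, best_score = (0, 0), None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score, n = 0, 0
            for y in range(h):
                for x in range(w):
                    xs, ys = x + dx, y + dy
                    if 0 <= xs < w and 0 <= ys < h:
                        score += abs(curr[ys][xs] - prev[y][x])
                        n += 1
            score = score / n  # mean absolute difference over the overlap
            # Keep the best-matching shift; prefer the smaller shift on ties.
            if (best_score is None or score < best_score or
                    (score == best_score and
                     abs(dx) + abs(dy) < abs(best[0]) + abs(best[1]))):
                best, best_score = (dx, dy), score
    return best

# A bright feature in the right column at t=0 appears in the center at t=1:
# the image content moved one square in the negative X-direction, meaning the
# HMP moved one square in the positive X-direction.
prev = [[0, 0, 9], [0, 0, 9], [0, 0, 9]]
curr = [[0, 9, 0], [0, 9, 0], [0, 9, 0]]
print(estimate_shift(prev, curr))  # → (-1, 0)
```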
<IJ Recording Head Drive Circuit 23>
FIG. 8 is an example of a configuration diagram of the IJ recording head drive circuit 23. First, the IJ recording head 24 includes multiple nozzles 61, and each of the nozzles 61 has an actuator provided. The actuator may be either of a thermal type or a piezoelectric type. The thermal type heats ink in the nozzle 61 to expand the ink, and discharges a droplet of the ink from the nozzle 61 by the expansion. The piezoelectric type applies pressure to the nozzle wall by a piezoelectric device to push ink out of the nozzle 61, and discharges a droplet of the ink.
The IJ recording head drive circuit 23 includes analog switches 231, a level shifter 232, a gradation decoder 233, latches 234, and a shift register 235. The IJ recording head controller 44 transfers image data SD, constituted of serial data items for the number of the nozzles 61 of the IJ recording head 24 (the number of actuators is the same), to the shift register 235 of the IJ recording head drive circuit 23 by using an image data transfer clock SCK.
Having completed the transfer, the IJ recording head controller 44 stores the items of the image data SD in the latches 234 provided for the respective nozzles 61 by image data latch signals SLn, respectively.
After having latched the image data SD, the IJ recording head controller 44 outputs a head drive waveform Vcom, to discharge droplets of the ink having respective gradation levels from the nozzles 61, to the analog switches 231. At this moment, the IJ recording head controller 44 gives a head drive mask pattern MN as a gradation control signal to the gradation decoder 233, and makes the head drive mask pattern MN transition to be selected in accordance with the timing of the drive waveform.
The gradation decoder 233 performs a logical operation on the gradation control signal and the latched image data, and the level shifter 232 boosts the logical-level voltage signal obtained by the logical operation up to a voltage level sufficient to drive the analog switches 231.
The analog switches 231 are turned on or off by the boosted voltage signal, and this makes the drive waveform VoutN supplied to the actuators of the IJ recording head have a different form for the respective nozzles 61. The IJ recording head 24 discharges droplets of the ink based on this drive waveform VoutN to form an image on the print medium 12.
Note that the configuration of FIG. 8 described above is a configuration generally adopted for printers of the inkjet type. A configuration other than the one in FIG. 8 may be adopted for the HMP 20 as long as droplets of ink can be discharged.
<About Nozzle Positions in IJ Recording Head>
Next, nozzle positions in the IJ recording head 24 will be described using FIGS. 9A-9B. FIG. 9A is an example of a plan view of the HMP 20. FIG. 9B is an example of a diagram illustrating only the IJ recording head 24. The illustrated surface faces the print medium 12. The HMP 20 in the present embodiment has one navigation sensor S0. For the sake of description, S1 in FIG. 9A designates the position at which a second navigation sensor would be mounted if two navigation sensors were mounted. If two navigation sensors S0-S1 are mounted, the length between S0 and S1 is represented by the distance L. The longer the distance L, the more preferable it is: a longer distance L makes the minimum detectable angle of rotation θ smaller, and hence makes the error of the position of the HMP 20 smaller.
The distances from the navigation sensors (S0 and S1) to the IJ recording head 24 are a and b, respectively. The distance a may be equal to the distance b, or may be zero (contacting the IJ recording head 24). If only one navigation sensor 30 is mounted, the navigation sensor S0 may be placed at any location around the IJ recording head 24; the illustrated position of the navigation sensor S0 is just an example. However, a shorter distance between the IJ recording head 24 and the navigation sensor S0 makes it easier to reduce the size of the bottom face of the HMP 20.
As illustrated in FIG. 9B, the distance from the edge of the IJ recording head 24 to the first nozzle 61 is d, and the distance between adjacent nozzles is e. The values of a to e are stored in the ROM 28 or the like in advance.
Once the position calculation circuit 34 and the like has calculated the position of the navigation sensor S0, the position calculation circuit 34 can calculate the position of each nozzle 61 by using the distance a (or the distance b), the distance d, and the distance e.
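This geometric step can be sketched as follows in head-local coordinates (rotation ignored). The function name, and the assumption that the nozzles are arrayed along the Y-axis at a fixed X offset of `a`, are illustrative, not taken from the embodiment:

```python
def nozzle_positions(sensor_xy, a, d, e, n_nozzles):
    """Derive each nozzle's coordinates from the navigation sensor
    position using the stored distances of FIG. 9B: `a` from the sensor
    to the head, `d` from the head edge to the first nozzle, and `e`
    between adjacent nozzles.  Head-local coordinates; the exact offset
    directions are an illustrative assumption."""
    x0, y0 = sensor_xy
    # Nozzles assumed arrayed along Y at a fixed X offset of `a`.
    return [(x0 + a, y0 + d + i * e) for i in range(n_nozzles)]
```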
<About Position of HMP 20 on Print Medium 12>
FIGS. 10A-10B are examples of diagrams illustrating a coordinate system of the HMP 20 and a method for calculating the position. In the embodiment, the X-axis is taken in the direction horizontal to the print medium 12, and the Y-axis in the direction vertical to the print medium 12. The origin is set at the position of the navigation sensor S0 when an image forming operation is started. These coordinates will be referred to as the "print medium coordinates". In contrast, the navigation sensor S0 outputs the amount of movement in the coordinate axes (X′-axis, Y′-axis) in FIGS. 10A-10B. In other words, the amount of movement is output in the coordinates where the Y′-axis is taken in the direction of the arranged nozzles 61, and the X′-axis in the direction perpendicular to the Y′-axis.
As illustrated in FIG. 9A, a case will be described in which the HMP 20 is rotated clockwise by θ with respect to the print medium 12. Since it is difficult for the user to perform a scanning operation of the HMP 20 with no inclination at all with respect to the print medium coordinates, it is natural to consider that a non-zero θ is generated inevitably. If there is no rotation, X=X′ and Y=Y′. However, if the HMP 20 rotates by the angle of rotation θ with respect to the print medium 12, the output of the navigation sensor S0 does not coincide with the actual position of the HMP 20 on the print medium 12. The angle of rotation θ is positive in the clockwise direction, X and X′ are positive in the rightward direction, and Y and Y′ are positive in the upward direction.
FIG. 10A is an example of a diagram illustrating the X-coordinate of the HMP 20. FIG. 10A illustrates the correspondence between the amount of movement (X, Y) and (ΔX′, ΔY′) detected by the navigation sensor S0 when the HMP 20, rotated by the angle of rotation θ, has moved only in the X-direction while keeping the same angle of rotation θ. Note that if two navigation sensors 30 are mounted, the outputs (amounts of movement) of the two navigation sensors 30 are the same because their relative positions are fixed. The X-coordinate of the navigation sensor S0 is X1+X2, and X1+X2 can be calculated from ΔX′, ΔY′, and the angle of rotation θ.
FIG. 10B illustrates the correspondence between the amount of movement (X, Y) and (ΔX′, ΔY′) detected by the navigation sensor S0 when the HMP 20, rotated by the angle of rotation θ, has moved only in the Y-direction while keeping the same angle of rotation θ. The Y-coordinate of the navigation sensor S0 is Y1+Y2, and Y1+Y2 can be calculated from −ΔX′, ΔY′, and the angle of rotation θ.
Therefore, if the HMP 20 has moved in the X-direction and the Y-direction while keeping the same angle of rotation θ, the ΔX′ and ΔY′ output by the navigation sensor S0 can be converted into X and Y in the print medium coordinates by the following formulas.
X=ΔX′ cos θ+ΔY′ sin θ  (1)
Y=−ΔX′ sin θ+ΔY′ cos θ  (2)
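Formulas (1)-(2) are a rotation of the sensor-frame movement into the print medium coordinates. A minimal sketch (the function name is illustrative; θ is in radians, positive clockwise as defined above):

```python
import math

def to_print_medium_coords(dx_p, dy_p, theta):
    """Convert a sensor-frame movement (ΔX′, ΔY′) into print medium
    coordinates (X, Y) for the angle of rotation theta."""
    x = dx_p * math.cos(theta) + dy_p * math.sin(theta)   # formula (1)
    y = -dx_p * math.sin(theta) + dy_p * math.cos(theta)  # formula (2)
    return x, y
```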
<<Detection of Angle of Rotation θ>>
In the embodiment, the position calculation circuit 34 calculates the angle of rotation θ based on the output of the gyro sensor 31. However, in order to show that the position can be obtained with higher precision for a longer distance L, a method for calculating the angle of rotation θ will be described for the case where two navigation sensors 30 are mounted.
FIG. 11 is an example of a diagram illustrating a method for calculating the angle of rotation dθ of the HMP 20 generated during image forming. The angle of rotation dθ is calculated using the amounts of movement ΔX′ detected by the two navigation sensors S0-S1. Here, ΔX′0 represents the amount of movement detected by the upper navigation sensor S0 on the print medium 12, and ΔX′1 represents the amount of movement detected by the lower navigation sensor S1. Note that in FIG. 11, θ represents the angle of rotation that has already been obtained.
If the HMP 20 moves horizontally while rotating by dθ, the amounts of movement ΔX′0 and ΔX′1 are not the same. However, since both ΔX′0 and ΔX′1 are output as amounts of movement in the direction perpendicular to the line connecting the two navigation sensors S0-S1, the difference between them can be calculated as ΔX′0−ΔX′1. This difference is generated by the rotation dθ of the HMP 20. Also, since ΔX′0−ΔX′1, L, and dθ have the relationship illustrated in FIG. 11, dθ can be represented by the following formula.
dθ=arcsin {(ΔX′0−ΔX′1)/L}  (3)
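Formula (3) can be sketched as follows (the function name is illustrative; the distance L and the movements must share the same units):

```python
import math

def rotation_increment(dx0_p, dx1_p, L):
    """Angle increment dθ per formula (3), from the X′ movements of the
    upper sensor S0 (dx0_p) and the lower sensor S1 (dx1_p), which are
    separated by the distance L."""
    return math.asin((dx0_p - dx1_p) / L)
```

Note that for the same movement difference, a larger L yields a smaller (more finely resolvable) dθ, which is why a longer distance L improves precision.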
The position calculation circuit 34 can calculate the angle of rotation θ by adding up dθ. As illustrated in formulas (1)-(2), since the angle of rotation θ is used for calculating the position, the angle of rotation θ affects the precision of the position. Also, as can be seen from formula (3), it is preferable to make the distance L greater so that a smaller dθ can be detected. Thus, the distance L affects the precision of the position, but a greater distance L makes the base area of the HMP 20 larger and the image formable area 501 smaller.
Next, a method of calculating the angle of rotation θ by using the output of the gyro sensor 31 will be described. The output of the gyro sensor 31 is the angular velocity ω represented by ω=dθ/dt, where dt is the sampling cycle. Therefore, the angle of rotation dθ can be represented by the following formula.
dθ=ω×dt
Consequently, the angle of rotation θ at time t=0 to N is represented by the following formula.
θ=Σω(t)×dt  (t=0 to N)
In this way, the angle of rotation θ can be obtained by the gyro sensor 31. As represented by formulas (1)-(2), the position can be calculated using the angle of rotation θ. Once the position of the navigation sensor S0 has been calculated, the position calculation circuit 34 can calculate the coordinates of each of the nozzles 61 by using the values of a to e illustrated in FIG. 9B. Note that since X in formula (1) and Y in formula (2) are the amounts of change in a sampling cycle, the current position is obtained by accumulating these X and Y, respectively.
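The accumulation of dθ = ω·dt into θ can be sketched as (illustrative function name; ω is sampled at a fixed sampling cycle dt):

```python
def integrate_heading(omegas, dt):
    """Accumulate the angle of rotation θ from the gyro sensor's
    angular-velocity samples: θ = Σ ω(t)·dt, with dθ = ω·dt per sample."""
    theta = 0.0
    for omega in omegas:
        theta += omega * dt  # dθ = ω·dt
    return theta
```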
<Targeting Discharge Position>
Next, the targeting discharge position will be described using FIG. 12. FIG. 12 is an example of a diagram illustrating the relationship between the targeting discharge positions and the positions of the nozzles 61. The targeting discharge positions G1-G9 are the target positions of the HMP 20 at which droplets of the ink will land (at which pixels will be formed). The targeting discharge positions G1-G9 can be calculated from the initial position of the HMP 20 and the resolutions of the HMP 20 in the X-axis and Y-axis directions, represented by (Xdpi, Ydpi).
For example, if the resolution is 300 dpi, the targeting discharge positions are set in the longitudinal direction of the IJ recording head 24 and in the perpendicular direction at intervals of approximately 0.084 mm. If there is a pixel to be discharged among the targeting discharge positions G1-G9, the HMP 20 discharges the ink for that pixel.
However, since it is difficult in practice to catch the timing at which a targeting discharge position exactly coincides with the position of a nozzle 61, the HMP 20 provides a permissible error 62 between the targeting discharge position and the current position of the nozzle 61. If the current position of the nozzle 61 comes within the permissible error 62 with respect to the targeting discharge position, the HMP 20 discharges the ink from the nozzle 61. This permissible range is used to determine whether or not to discharge the ink from a nozzle 61.
Also, as designated by an arrow 63, the HMP 20 monitors the direction of movement and acceleration of the nozzle 61, and predicts the position of the nozzle 61 at the timing of the next discharge. By comparing the predicted position with the range of the permissible error 62, the HMP 20 can prepare for the next discharge of the ink.
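The discharge decision and the position prediction can be sketched as follows. The circular tolerance test and the constant-acceleration extrapolation are our illustrative assumptions; the embodiment does not specify the exact forms:

```python
def should_discharge(nozzle_xy, target_xy, tolerance):
    """Discharge when the nozzle lies within the permissible error of a
    targeting discharge position (circular tolerance assumed)."""
    dx = nozzle_xy[0] - target_xy[0]
    dy = nozzle_xy[1] - target_xy[1]
    return dx * dx + dy * dy <= tolerance * tolerance

def predict_next_position(xy, velocity, acceleration, dt):
    """Predict the nozzle position at the next discharge timing from its
    current motion (constant-acceleration extrapolation assumed)."""
    return (xy[0] + velocity[0] * dt + 0.5 * acceleration[0] * dt * dt,
            xy[1] + velocity[1] * dt + 0.5 * acceleration[1] * dt * dt)
```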
<Operational Steps>
FIG. 13 is an example of a flowchart illustrating operational steps of the image data output device 11 and the HMP 20. First, the user presses a power button of the image data output device 11 (Step U101). In response to the pressing operation, the image data output device 11 receives power supply from a battery or the like and is activated.
The user selects a desired image to output on the image data output device 11 (Step U102). The image data output device 11 receives the selection of the image. Document data from software such as a word processor application may be selected as the image, or image data such as JPEG may be selected. The printer driver may convert data other than image data into an image if necessary.
The user performs an operation to print the selected image with the HMP 20 (Step U103). The HMP 20 receives a request for executing the print job. In response to the request for the print job, the image data is transmitted to the HMP 20.
The user grips the HMP 20 and determines the initial position on the print medium 12 (for example, a notebook) (Step U104).
Then, the user presses a print start button of the HMP 20 (Step U105). The HMP 20 receives the press of the print start button.
The user makes a scanning movement of the HMP 20 by freely sliding the HMP 20 on the print medium 12 (Step U106).
Next, operations of the HMP 20 will be described. The following operations are executed by the CPU 33 running the firmware.
The HMP 20 is also activated when the power is turned on. The CPU 33 of the HMP 20 initializes the hardware elements in FIGS. 3 and 4 that are built into the HMP 20 (Step S101). For example, the CPU 33 initializes registers of the navigation sensor I/F 42 and the gyro sensor I/F 45, and sets a timing value in the print/sense timing generator 43. Also, the CPU 33 establishes communication between the HMP 20 and the image data output device 11.
The CPU 33 of the HMP 20 determines whether the initialization has been completed, and if not, repeats this determination (Step S102).
Once the initialization has been completed (YES at S102), the CPU 33 of the HMP 20 indicates to the user that the HMP 20 is ready for printing, for example, by turning on the LED of the OPU 26 (Step S103). Thereby, the user learns that the HMP 20 is ready for printing, and makes the request for executing the print job as described above.
In response to the request for executing the print job, the communication I/F 27 of the HMP 20 receives the image data input from the image data output device 11, and indicates to the user that the image has been input, for example, by blinking the LED of the OPU 26 (Step S104).
When the user has determined the initial position of the HMP 20 on the print medium 12 and has pressed the print start button, the OPU 26 of the HMP 20 receives this operation, and the CPU 33 makes the navigation sensor I/F 42 read the position (the amount of movement) (Step S105). Then, the navigation sensor I/F 42 communicates with the navigation sensor S0, obtains the amount of movement detected by the navigation sensor S0, and stores the amount in a register or the like (Step S1001). The CPU 33 reads out the amount of movement from the navigation sensor I/F 42.
The amount of movement obtained right after the user presses the print start button is usually zero; even if it is not actually zero, the CPU 33 stores the value, for example in the DRAM 29 or registers of the CPU 33, as the initial position represented by the coordinates (0, 0) (Step S106).
Also, the print/sense timing generator 43 starts generating timing after the initial position has been obtained (Step S107). When a timing to obtain the amount of movement of the navigation sensor S0, set by the initialization, comes, the print/sense timing generator 43 indicates the timing to the gyro sensor I/F 45 and the navigation sensor I/F 42. This is performed periodically, at the sampling cycle described above.
The CPU 33 of the HMP 20 determines whether it is a timing to obtain information about the amount of movement and the angular velocity (Step S108). This determination is performed in response to an indication from the interrupt controller 41, but the CPU 33 may count the time in the same way as the print/sense timing generator 43 so as to determine the timing by itself.
When the timing comes to obtain information about the amount of movement and the angular velocity, the CPU 33 of the HMP 20 obtains the amount of movement from the navigation sensor I/F 42 and the angular velocity information from the gyro sensor I/F 45 (Step S109). As described above, the gyro sensor I/F 45 has obtained the angular velocity information from the gyro sensor 31, and the navigation sensor I/F 42 has obtained the amount of movement from the navigation sensor S0, each at the timing generated by the print/sense timing generator 43.
Next, the position calculation circuit 34 calculates the current position of the navigation sensor S0 by using the angular velocity information and the amount of movement (Step S110). Specifically, the position calculation circuit 34 calculates the current position of the navigation sensor S0 by adding, to the position (X, Y) calculated in the previous cycle, the moved distance calculated from the amount of movement (ΔX′, ΔY′) and the angular velocity information obtained this time. If only the initial position is available and there is no previously calculated position, the position calculation circuit 34 adds the moved distance to the initial position instead.
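Steps S108-S110 amount to dead reckoning: the gyro heading is integrated and each rotated sensor movement is accumulated onto the previous position. A minimal sketch under the conventions of formulas (1)-(2), with illustrative names and samples given as (ΔX′, ΔY′, ω) tuples:

```python
import math

def dead_reckon(samples, dt):
    """Integrate the gyro heading per sample (dθ = ω·dt), rotate each
    sensor-frame movement (ΔX′, ΔY′) into print medium coordinates, and
    accumulate onto the previous position, starting from the initial
    position (0, 0)."""
    x = y = theta = 0.0
    for dx_p, dy_p, omega in samples:
        theta += omega * dt
        x += dx_p * math.cos(theta) + dy_p * math.sin(theta)
        y += -dx_p * math.sin(theta) + dy_p * math.cos(theta)
    return x, y, theta
```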
Next, the position calculation circuit 34 calculates the current position of each of the nozzles 61 by using the current position of the navigation sensor S0 (Step S111).
In this way, since the angular velocity information and the amount of movement are obtained at virtually the same time via the print/sense timing generator 43, the positions of the nozzles 61 can be calculated from the angle of rotation and the amount of movement obtained at the timing when the angle of rotation was detected. Therefore, the precision of the positions of the nozzles 61 hardly decreases even though the positions are calculated from information obtained by different types of sensors.
Next, the CPU 33 controls the DMAC 38 and transmits image data of peripheral images around the nozzles 61 from the DRAM 29 to the Image RAM 37, based on the calculated positions of the nozzles 61 (Step S112). Note that the rotator 39 rotates the image depending on the head position specified by the user (the way of gripping the HMP 20, and the like) and the inclination of the IJ recording head 24.
Next, the IJ recording head controller 44 compares the position coordinates of each pixel constituting the peripheral image with the position coordinates of the nozzles 61 (Step S113). The position calculation circuit 34 calculates the acceleration of the nozzles 61 by using the past positions and the current positions of the nozzles 61. This makes it possible for the position calculation circuit 34 to calculate the positions of the nozzles 61 every ink discharge cycle of the IJ recording head 24, which is shorter than the cycle at which the navigation sensor I/F 42 obtains the amount of movement and the cycle at which the gyro sensor I/F 45 obtains the angular velocity information.
The IJ recording head controller 44 determines whether the position coordinates of an image element are included in a predetermined range from the position of the nozzle 61 calculated by the position calculation circuit 34 (Step S114).
If the discharge condition is not satisfied, the process returns to Step S108. If the discharge condition is satisfied, the IJ recording head controller 44 outputs the data of the image element for each of the nozzles 61 to the IJ recording head drive circuit 23 (Step S115). Thus, the ink is discharged onto the print medium 12.
Next, the CPU 33 determines whether the whole image data has been output (Step S116). If not, Steps S108 to S115 are repeated.
If the whole image data has been output, the CPU 33 indicates to the user that the printing has been completed, for example, by blinking the LED of the OPU 26 (Step S117).
Note that if the user judges that a sufficient image has been obtained without outputting the whole data, the user may press a print completion button, which is received by the OPU 26 to end printing. After printing has ended, the user may turn off the power, or the power may be set to turn off automatically when the printing has completed.
<Image Formable Area in Case of Single Navigation Sensor>
The image formable area 501 in the case of a single navigation sensor 30 will be described by using FIGS. 14A to 15F. FIGS. 14A-14F are examples of comparative diagrams illustrating the image formable areas 501 in the case of two navigation sensors 30.
The two navigation sensors 30 are arranged in parallel with the nozzles 61 in FIG. 14A. Note that FIG. 14A is a diagram of the HMP 20 viewed from above. FIG. 14B illustrates a polygon 502 formed by the parts (the navigation sensors S0-S1 and the nozzles 61) that have to be positioned on the print medium 12 with the arrangement in FIG. 14A. The polygon 502 is an example illustrating the size of the base. Assume that the interval between the nozzles 61 and the navigation sensors S0-S1 forming the polygon 502 is A mm, and the interval between the lower end of the nozzles 61 and the lower navigation sensor S1 is B mm.
FIG. 14C illustrates the image formable area 501 for the arrangement in FIG. 14A. Since the nozzles 61 are positioned at the left end and the upper end of the polygon 502, the nozzles 61 can form an image from the upper left end of the print medium 12. In contrast, since the nozzles 61 have the interval of A mm to the navigation sensors S0-S1 on the right side, the nozzles 61 cannot be moved rightward beyond the right end of the print medium 12 (cannot form an image there). Therefore, the right end of the image formable area 501 is located on the line distant from the right end of the print medium 12 by the length A mm. Similarly, since the nozzles 61 have the interval of B mm to the lower navigation sensor S1, the nozzles 61 cannot be moved downward beyond the lower end of the print medium 12 (cannot form an image there). Therefore, the lower end of the image formable area 501 is located on the line distant from the lower end of the print medium 12 by the length B mm.
The two navigation sensors S0-S1 are arranged above and below the nozzles 61 in series in FIG. 14D. FIG. 14E illustrates a polygon 502 formed by the parts (the navigation sensors S0-S1 and the nozzles 61) that have to be positioned on the print medium 12 with the arrangement in FIG. 14D. Assume that the interval between the navigation sensors S0-S1 forming the polygon 502 is A mm.
FIG. 14F illustrates the image formable area 501 for the arrangement in FIG. 14D. Since the vertical length of the print medium 12 is shorter than the length A mm, the HMP 20 cannot form an image on the print medium 12. Even if the HMP 20 is rotated by 90° so that the navigation sensors S0-S1 are arranged in parallel with the lateral direction of the print medium 12, the HMP 20 cannot form an image on the print medium 12 because the lateral length of the print medium 12 is also shorter than the length A mm. Therefore, there is no image formable area 501 in FIG. 14F.
In this way, if at least one of the two navigation sensors 30 sticks out of the print medium 12, the HMP 20 cannot detect the position, or may detect it imprecisely. As such, a large base of the HMP has brought the inconvenience of limiting the image formable area 501 on the print medium 12. Also, even if the two navigation sensors 30 are on the print medium 12, the HMP 20 naturally cannot form an image when the IJ recording head 24 goes out of the print medium 12. Therefore, the two navigation sensors 30 and the IJ recording head 24 need to be positioned inside the print medium 12, and the image formable area 501 is limited accordingly. It might thus be difficult for the user to use the space on the print medium 12 widely for forming an image with a conventional HMP 20.
FIGS. 15A-15F are examples of diagrams illustrating image formable areas 501 in the case of one navigation sensor 30. One navigation sensor 30 is arranged in series with the nozzles 61 in FIG. 15A. In other words, the navigation sensor S0 is placed adjacent to, and below, the nozzles 61, as close as possible. Note that FIG. 15A is a diagram of the HMP 20 viewed from above. FIG. 15B illustrates a polygon 502 formed by the parts (the navigation sensor S0 and the nozzles 61) that have to be positioned on the print medium 12 with the arrangement in FIG. 15A. Assume that the distance between the lower end of the nozzles 61 and the navigation sensor S0 forming the polygon 502 is A mm.
FIG. 15C illustrates the image formable area 501 for the arrangement in FIG. 15A. Since the polygon 502 has an almost linear shape, an image can be formed from the upper end of the left edge of the print medium 12 to the upper end of the right edge of the print medium 12. However, since the nozzles 61 have the interval of A mm to the navigation sensor S0 below, the nozzles 61 cannot be moved downward beyond the lower end of the print medium 12 (cannot form an image there). Therefore, the lower end of the image formable area 501 is located on the line distant from the lower end of the print medium 12 by the length A mm.
As is obvious by comparing FIGS. 14C and 14F with FIG. 15C, providing just one navigation sensor 30 greatly expands the image formable area 501. In particular, since the top half and more of the print medium 12 becomes the image formable area 501, if the HMP 20 is rotated by 180°, the bottom half of the print medium 12 also becomes the image formable area 501. Therefore, virtually the entire area of the print medium 12 becomes the image formable area 501. Note that the navigation sensor S0 may be arranged above the IJ recording head 24 in FIG. 15A. In this case, the upper end of the image formable area 501 is located on the line distant from the upper end of the print medium 12 by the length A mm (the image formable area 501 in FIG. 15C turned upside down).
In FIG. 15D, one navigation sensor S0 is arranged in the direction perpendicular to the direction of the arrayed nozzles 61. In other words, the navigation sensor S0 is placed adjacent to the right side of the nozzles 61, as close as possible. FIG. 15E illustrates a polygon 502 formed by the parts (the navigation sensor S0 and the nozzles 61) that have to be positioned on the print medium 12 with the arrangement in FIG. 15D. Assume that the distance between the nozzles 61 and the navigation sensor S0 forming the polygon 502 is A mm.
FIG. 15F illustrates the image formable area 501 for the arrangement in FIG. 15D. Since the nozzles 61 are at the upper end and the left end of the polygon 502, and do not have the navigation sensor S0 below them, an image can be formed from the upper end to the lower end on the left side of the print medium 12. However, since the nozzles 61 have the interval of A mm to the navigation sensor S0 on the right, the nozzles 61 cannot be moved rightward beyond the right end of the print medium 12 (cannot form an image there). Therefore, the right end of the image formable area 501 is located on the line distant from the right end of the print medium 12 by the length A mm.
As is obvious by comparing FIGS. 14C and 14F with FIG. 15F, providing just one navigation sensor 30 greatly expands the image formable area 501. In particular, since the left half and more of the print medium 12 becomes the image formable area 501, if the HMP 20 is rotated by 180°, the right half of the print medium 12 also becomes the image formable area 501. Therefore, virtually the entire area of the print medium 12 becomes the image formable area 501. Note that the navigation sensor S0 may be arranged on the left of the IJ recording head 24 in FIG. 15D. In this case, the left end of the image formable area 501 is located on the line distant from the left end of the print medium 12 by the length A mm (the image formable area 501 in FIG. 15F flipped horizontally).
Also, as illustrated in FIG. 16, the navigation sensor S0 may be mounted on the IJ recording head 24. FIG. 16 is an example of a diagram illustrating an arrangement of the navigation sensor S0. The navigation sensor S0 is mounted, for example, in a hole prepared in a bracket (a holding member) surrounding the nozzles 61. Alternatively, if the navigation sensor S0 does not need to be placed around the base, the navigation sensor S0 may be mounted inside the housing rather than on the base surface. In this case, since the navigation sensor S0 and the nozzles 61 do not need to be mounted on the same surface, the interval between the navigation sensor S0 and the nozzles 61 can be shortened.
<Example of Arrangement of Gyro Sensor 31>
It is known that the gyro sensor 31 is preferably placed close to the rotational center. However, it is often the case that the rotational center of the HMP 20 is located around the elbow of the user rather than at the center or the center of gravity of the HMP 20. This is because the user performs a scanning operation of the HMP using the elbow as the rotational center. Accordingly, the gyro sensor 31 is preferably placed as illustrated in FIGS. 17A-17C.
FIG. 17A is an example of a diagram illustrating an arrangement of the gyro sensor 31. The gyro sensor 31 is placed on the near side of the housing and at the center in the width direction of the HMP 20, because this location is the closest to the user's elbow. In this placement, the gyro sensor 31 is close to the rotational center, and hence, it is expected that the precision of the detected angular velocity will increase. Note that the user may perform a scanning operation with the HMP 20 rotated 90° or 180°. Taking this into consideration, it is preferable that one or more gyro sensors 31 are disposed on the edge of the HMP 20 in a plan view from above the HMP 20.
On the other hand, if the HMP 20 is assumed to be a rigid body, the gyro sensor 31 is not necessarily limited to placement on the near side of the housing, but may be placed anywhere on the HMP 20 with practically sufficient precision. However, the neighborhood of a part of the HMP 20 that may be touched by the user may receive deforming force when the user performs a scanning operation with the HMP 20. If such deformation is transferred to the gyro sensor 31, noise may be mixed into the angular velocity.
Therefore, it is preferable that the gyro sensor 31 is placed inside the HMP 20 where it is hard to be touched by the user and hard to be deformed. Specifically, the gyro sensor 31 is mounted on a printed circuit board 70 near the base of the HMP 20. Note that the printed circuit board 70 is a planar-shaped part made of resin or the like, on which electronic components, integrated circuits (ICs), and metal wiring connecting the components are mounted in high density. A printed board may also be called a PWB (printed wiring board) or an electronic board. By placing the gyro sensor 31 in this way, even if the HMP 20 deforms under the force of an ordinary person, the deformation is hard to transfer to the gyro sensor 31, and hence, it is possible to prevent noise from mixing into the angular velocity.
Also, it is known that the angular velocity output of the gyro sensor 31 is affected by changes of temperature. Therefore, the gyro sensor 31 is preferably placed where the temperature does not change much in the housing. Circuits in which much current flows, such as a power supply or an LSI, for example the SoC and the ASIC/FPGA, are heating elements that generate heat during operation, so the temperature changes considerably in their neighborhood. Therefore, it is preferable that the gyro sensor 31 is placed as far away from the SoC 50 and the ASIC/FPGA 40 as possible, as illustrated in FIG. 17B. In FIG. 17B, the gyro sensor 31 is at least mounted on a printed circuit board 70 different from that of the SoC 50 and the ASIC/FPGA 40.
Note that a person's hand also serves as a heat source. In this regard as well, it is preferable that the gyro sensor 31 is placed near the base surface.
Also, if the temperature characteristic is linear or can be investigated in advance, the temperature may be measured by a temperature sensor so as to correct the angular velocity depending on the temperature characteristic.
It is also effective to mount the navigation sensor S0 and the gyro sensor 31 on the same printed circuit board 70. In FIG. 17C, the gyro sensor 31 and the navigation sensor S0 are mounted on a single printed circuit board 70 placed on the near side of the HMP 20. Having the gyro sensor 31 and the navigation sensor S0 positioned close to each other does not necessarily contribute to reducing the size of the base surface of the HMP 20. However, if the gyro sensor 31 is triaxial, such placement has the advantages described below, because it becomes possible to detect a change of the posture other than the angular velocity about the axis perpendicular to the print medium 12.
FIG. 18 is an example of a diagram illustrating a posture of the HMP 20 detected by a gyro sensor. If the gyro sensor 31 is triaxial, the angular velocity can be detected in the yaw direction, the roll direction, and the pitch direction. Among these, the angular velocity in the yaw direction is used for calculating the position of the navigation sensor. If an angular velocity is detected in the roll direction or the pitch direction while an image is being formed, the navigation sensor 30 may have detached from the print medium 12. If the navigation sensor S0 and the gyro sensor 31 are mounted on the same printed circuit board 70, it can be considered that substantially the same angular velocity as that detected by the gyro sensor 31 in the roll and pitch directions acts upon the navigation sensor S0. Therefore, it becomes easy to detect the angular velocity generated in the roll and pitch directions to the extent that the navigation sensor S0 is detached from the print medium 12.
Therefore, if the navigation sensor S0 and the gyro sensor 31 are mounted on the same printed circuit board 70, using a triaxial gyro sensor 31 makes it easier to detect whether the navigation sensor S0 has detached from the print medium 12.
<Treatment at the Time of Attachment of the Gyro Sensor 31>
Also, if the gyro sensor 31 can detect the angular velocity triaxially, the HMP 20 and the like can detect the attachment precision when the gyro sensor 31 has been attached to the HMP 20. It is preferable that the gyro sensor 31 is attached level with the HMP 20 as precisely as possible. In that case, a change of the posture of the HMP 20 in the yaw direction can be detected by the angular velocity in the yaw direction alone. However, the level may be slightly shifted in the actual attachment. In such a case, even if the user rotates the HMP 20 while keeping it horizontal, an angular velocity may be detected in the roll direction or the pitch direction.
To cope with this problem, the user may operate the HMP 20 in a test mode or the like, and perform a scanning operation with the HMP 20 while keeping it horizontal. The HMP 20 may detect the angular velocity in the roll direction or the pitch direction, and determine how much the gyro sensor 31 inclines with respect to the HMP 20. Once the degree of the inclination becomes known, it is possible to correct the angular velocity in the yaw direction.
Also, if the gyro sensor 31 detects the angular velocity only monoaxially (only in the yaw direction), a jig may be provided that can make a movement starting from 0°, shifting to 90°, and returning to 0°. While the user moves the jig through this sweep, the HMP 20 detects the angle. If the detected angle does not match 90° during the movement, the HMP 20 calculates a correction coefficient to make the detected angle match 90°, and stores the coefficient in the device. Using this coefficient, the angular velocity can be corrected during an actual image formation operation.
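The jig-based calibration reduces to a single scale factor. A minimal sketch; the function name and the multiplicative form of the correction are our illustrative assumptions:

```python
def gain_correction(detected_angle_deg, true_angle_deg=90.0):
    """Correction coefficient from the 0°-90°-0° jig test: the factor
    that scales the angle detected during the sweep onto the true 90°.
    The coefficient is stored and applied to later angular-velocity
    readings (multiplicative correction assumed)."""
    return true_angle_deg / detected_angle_deg
```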
Second Application Example
When the user performs the scanning operation with the HMP 20 on the print medium 12, the base of the HMP 20 may slightly float over the print medium 12. In such a case, the navigation sensor 30 also floats, and hence, a position detected by the navigation sensor 30 becomes imprecise. An error of the position due to the floating may be negligible by itself, but the resolution of the amount of movement changes considerably.
The optical resolution of the navigation sensor 30 may be represented by CPI (counts per inch). This represents the number counted while the navigation sensor 30 moves by 1 inch; the greater the number is, the higher the resolution is.
As illustrated in FIG. 19, the resolution of the amount of movement of the navigation sensor 30 changes depending on the distance between the surface of the paper and the navigation sensor 30. FIG. 19 is an example of a diagram illustrating the change of the distance between the navigation sensor 30 and the sheet, and the resolution of the amount of movement. It can be understood that the greater the distance between the sensor and the paper becomes, the lower the resolution becomes. This is because the navigation sensor 30 optically detects edges on the print medium 12, and hence, a closer distance between the sensor and the paper is more advantageous for detecting the edges (it is easier to detect finer edges).
Considering such a property of the navigation sensor 30, the maker of the HMP 20 sets the resolution for detecting the amount of movement in the X and Y directions in advance, in accordance with the attached position of the navigation sensor 30 in the HMP 20.
Therefore, if the HMP 20 floats over the print medium 12, the measured amount of movement may be detected as smaller than the actual amount of movement, and hence, it is preferable to correct the deviation in a certain way. The present application example will describe an HMP 20 that prevents the resolution of the amount of movement from being reduced when the HMP 20 floats over the print medium 12.
FIG. 20A is a diagram illustrating the attached position of the navigation sensor 30 viewed from the upper side of the HMP 20, and FIG. 20B is a diagram illustrating triaxial rotation of the navigation sensor 30.
As illustrated in FIG. 20B, the print medium 12 is laid on the X-Y plane, and the Z-axis is taken in the direction perpendicular to the print medium 12. As described in the application example 1, the positions of the nozzles 61 are calculated by the angle of rotation around the Z-axis. On the other hand, how much the navigation sensor 30 floats over the print medium 12 is affected by at least one of the angle of rotation around the X-axis and the angle of rotation around the Y-axis. Note that translation of the entire navigation sensor 30 in the Z-axis direction need not be considered. This is because it is unlikely that the user uses the HMP 20 in such a way, and if the entire navigation sensor 30 floats too much, it may be recognized as an error.
Assuming that the angle of rotation is zero both around the X-axis and around the Y-axis when printing is started, if the gyro sensor 31 detects a nonzero angle of rotation around the X-axis (in the Y-Z plane) or a nonzero angle of rotation around the Y-axis (in the X-Z plane), the position calculation circuit 34 can detect that the navigation sensor 30 is floating over the print medium 12 during the scanning for printing.
Next, a method for calculating the amount of floating will be described with reference to FIGS. 21A-21C. FIGS. 21A-21C are examples of diagrams illustrating the amount of floating of the navigation sensor 30 over the print medium 12. FIG. 21A is a side view of the navigation sensor 30 viewed in the Y-axis direction in FIG. 20A. FIG. 21B illustrates the amount of floating of the navigation sensor 30 when rotating (floating) around its left edge as the rotational axis, and FIG. 21C illustrates the amount of floating of the navigation sensor 30 when rotating (floating) around its right edge as the rotational axis.
Here, L1 represents the distance between the navigation sensor 30 and the left edge of the housing of the HMP 20, and L2 represents the distance between the navigation sensor 30 and the right edge. Since the rotational direction around the Y-axis flips between rotation around the left edge and rotation around the right edge, the position calculation circuit 34 can determine the rotational direction (rotation around the left edge or rotation around the right edge) depending on whether the angle of rotation is positive or negative. The position calculation circuit 34 calculates the amount of floating as illustrated in FIG. 21B for the rotation around the left edge, or calculates the amount of floating as illustrated in FIG. 21C for the rotation around the right edge.
Here, α represents the angle of rotation detected by the gyro sensor 31. Once the rotational direction has been determined, only the absolute value of α needs to be considered. The amount of floating for the rotation around the left edge based on FIG. 21B is represented as follows.
L1 sin α
Similarly, the amount of floating for the rotation around the right edge based on FIG. 21C is represented as follows.
L2 sin α
The amount of floating as illustrated in FIGS. 21A-21C is generated by rotation around the Y-axis. In addition, the HMP 20 may be rotated around the X-axis. Therefore, the position calculation circuit 34 also calculates the amount of floating generated by rotation around the X-axis. Thus, the overall amount of floating is calculated by the following formula as the sum of the amount of floating by rotation around the X-axis and the amount of floating by rotation around the Y-axis.
Amount of floating of the navigation sensor 30=(amount of floating by rotation around the X-axis)+(amount of floating by rotation around the Y-axis)
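The calculation above can be sketched as follows. Only the L·sin α form and the summing of the two axes come from the text; the lever lengths in the usage example and the sign convention (which sign of the detected angle corresponds to which housing edge) are assumptions for illustration.

```python
import math

def floating_amount_one_axis(angle_rad, lever_pos, lever_neg):
    # The sign of the detected angle of rotation selects the rotational
    # axis (e.g. left edge, at distance L1, vs. right edge, at distance
    # L2); after that, only the absolute value of the angle matters.
    lever = lever_pos if angle_rad >= 0.0 else lever_neg
    return lever * math.sin(abs(angle_rad))

def total_floating_amount(angle_x, angle_y, levers_x, levers_y):
    # Overall amount of floating = (amount by rotation around the X-axis)
    #                            + (amount by rotation around the Y-axis)
    return (floating_amount_one_axis(angle_x, *levers_x)
            + floating_amount_one_axis(angle_y, *levers_y))
```

For example, with L1 = 40 mm, L2 = 25 mm, and a 2° rotation around the Y-axis only, the amount of floating is 40·sin 2° ≈ 1.4 mm.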
Note that the navigation sensor 30 that is floating may have moved not only in the height direction of the navigation sensor 30, but also in the lateral direction. That amount of movement is represented by “L1−L1 cos α” or “L2−L2 cos α”. However, since the amount of floating is small in an actual scanning operation of the HMP 20, this amount may be negligible.
However, a decreased resolution of the amount of movement, which is caused by a changed distance between the navigation sensor 30 and the print medium 12 due to the floating as described above, cannot be disregarded. This is because even if the navigation sensor 30 floats just slightly during a scanning operation, the resolution of the amount of movement output by the navigation sensor 30 changes (decreases) considerably, such that the detected amount of movement is smaller than the actual amount of movement of the navigation sensor 30.
Thereupon, by using the proportional relationship between the distance between the sensor and the paper and the resolution of the amount of movement illustrated in FIG. 19, the position calculation circuit 34 converts the amount of floating into the change of the resolution, to correct the amount of movement. The correction will be described with a specific example.
Assume that the relationship between the distance between the sensor and the paper and the resolution of the amount of movement of the navigation sensor 30 is, for example, as follows:
a resolution of 4000 cpi if the distance between the sensor and the paper is 2 mm; and
a resolution of 3500 cpi if the distance between the sensor and the paper is 2.1 mm.
In other words, if the distance between the sensor and the paper changes from 2 mm to 2.1 mm, the resolution of the amount of movement decreases by 500 cpi. This means that the amount of movement output by the navigation sensor 30 is 157 counts for the navigation sensor 30 having moved by 1 mm when the distance between the print medium and the sensor is 2 mm, whereas the amount of movement output by the navigation sensor 30 decreases to 137 counts for the navigation sensor 30 having moved by 1 mm when the distance between the print medium and the sensor is changed to 2.1 mm. Thus, the calculated amount of movement is less than the actual amount of movement, and hence, the calculated position of the navigation sensor 30 is shifted from the actual position.
However, by having the HMP 20 store the proportional relationship between the distance between the sensor and the paper and the resolution of the amount of movement as illustrated in FIG. 19, the position calculation circuit 34 can estimate the resolution of the amount of movement from the amount of floating, to correct the position.
For example, the proportional relationship between the distance between the sensor and the paper and the resolution of the amount of movement may be represented by an expression y=ax+b, where y represents the resolution of the amount of movement, x represents the distance between the sensor and the paper, and a and b are coefficients. The resolution of the amount of movement can be calculated from this expression by adding the amount of floating to x. First, divide the resolution of the amount of movement when the amount of floating is zero by the calculated resolution of the amount of movement, and then, multiply the quotient by the amount of movement (counts) detected by the floating navigation sensor 30. In this way, even if the navigation sensor 30 floats, the amount of movement can be corrected to the value that would be obtained with the amount of floating being zero.
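Using the two sample points above, (2.0 mm, 4000 cpi) and (2.1 mm, 3500 cpi), the line y=ax+b has a=−5000 and b=14000. The following sketch applies the correction just described; the baseline distance of 2 mm (zero floating) is an assumption for illustration.

```python
A = -5000.0  # cpi per mm, fitted to the two sample points above
B = 14000.0
BASE_DISTANCE_MM = 2.0  # sensor-to-paper distance with zero floating

def resolution_cpi(distance_mm):
    # Proportional relationship of FIG. 19: y = a*x + b
    return A * distance_mm + B

def corrected_counts(raw_counts, floating_mm):
    # Divide the zero-floating resolution by the resolution at the
    # current amount of floating, then multiply by the detected counts.
    base = resolution_cpi(BASE_DISTANCE_MM)
    actual = resolution_cpi(BASE_DISTANCE_MM + floating_mm)
    return raw_counts * base / actual
```

A 1 mm movement with 0.1 mm of floating yields about 3500/25.4 ≈ 137.8 counts; corrected_counts scales this back to about 4000/25.4 ≈ 157.5 counts, the zero-floating value.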
Note that since the proportional relationship between the distance between the sensor and the paper and the resolution of the amount of movement may change depending on the type of paper, it is preferable to hold the proportional relationship for each of the types of paper.
FIG. 22 is an example of a flowchart illustrating operational steps of the image data output device 11 and the HMP 20. FIG. 22 will be described mainly in terms of differences from FIG. 13.
As the first difference, at Step S106, the CPU 33 stores the initial angle (X, Y) of the gyro sensor 31 along with the initial position represented by the coordinates (0, 0) in the DRAM 29 or the registers of the CPU 33 (Step S106). In other words, the CPU 33 reads the angular velocity information of the gyro sensor 31.
Then, at Step S110, when calculating the current position of the navigation sensor S0 by using the angular velocity information and the amount of movement, the position calculation circuit 34 calculates the amount of floating based on the angles of rotation around the X-axis and around the Y-axis detected by the gyro sensor 31, to correct the position of the navigation sensor 30.
As described above, according to the present application example, even if the resolution of the amount of movement changes due to the floating navigation sensor 30, the position of the navigation sensor 30 can be corrected.
Other Application Examples
As above, the preferred embodiments have been described with the application examples. Note that the present invention is not limited to these embodiments and application examples, but various variations and modifications may be made without departing from the scope of the present invention.
For example, elements in the SoC 50 and the ASIC/FPGA 40 may be included in either the SoC 50 or the ASIC/FPGA 40 depending on the CPU performance, the circuit size of the ASIC/FPGA 40, and the like. Also, although the embodiments describe image forming in terms of discharging ink, image forming may be done by emitting visible light rays, ultraviolet rays, infrared rays, laser beams, and the like. In this case, for example, a material that reacts to heat or light may be used as the print medium 12. Also, a transparent liquid may be discharged. In this case, visible information may be obtained by emitting light in a specific range of wavelengths. Also, metallic paste or resin may be discharged.
Also, although the embodiments described that the gyro sensor 31 detects the posture on the print medium 12, the posture (orientation) in the horizontal direction can also be detected by a geomagnetic sensor.
Also, the number of the gyro sensors 31 to be disposed is not limited to one, but may be two or more.
Note that the navigation sensor S0 is an example of a moved amount detector, the gyro sensor 31 is an example of a posture detector, and the position calculation circuit 34 is an example of a position calculator. The print/sense timing generator 43 is an example of a timing indicator, the IJ recording head controller 44 is an example of a droplet discharger, and the HMP 20 is an example of a droplet discharging apparatus. The CPU 33, the position calculation circuit 34, and the gyro sensor 31 are an example of a floating amount calculator.
Also, an apparatus that has the functions minimally required for calculating the position of the HMP 20 is a position detection apparatus. For example, a position detection apparatus includes the navigation sensor S0, the gyro sensor 31, the position calculation circuit 34, and the CPU 33. In other words, an HMP 20 that does not include the functions required for image forming is a position detection apparatus. Also, an apparatus having a position detection apparatus is a mounted object, and the HMP 20 is an example of the mounted object.
RELATED-ART DOCUMENTS
Patent Documents
[Patent Document 1] Japanese Unexamined Patent Application Publication (Translation of PCT Application) No. 2010-522650
The present application claims priority under 35 U.S.C. § 119 of Japanese Patent Application No. 2016-053538 filed on Mar. 17, 2016, and Japanese Patent Application No. 2016-251726 filed on Dec. 26, 2016, the entire contents of which are hereby incorporated by reference.