CROSS-REFERENCE TO RELATED APPLICATIONS This application claims priority to European Patent Application No. 04291918.3, filed on Jul. 27, 2004 and incorporated herein by reference. This application is related to co-pending and commonly assigned applications Ser. No. ______ (Attorney Docket No. TI-38581 (1962-22200)), entitled, “Emulating A Direct Memory Access Controller,” and Ser. No. ______ (Attorney Docket No. TI-38584 (1962-22500)), entitled, “Interrupt Management In Dual Core Processors,” which are incorporated by reference herein.
BACKGROUND Many systems comprise dual processor cores. One of these processor cores is typically designated to be the “host,” or main, processor. The other processor may be termed a “secondary” processor. While performing a series of tasks, the host processor may determine that delegating one or more tasks to the secondary processor would be expeditious, so that the host processor may allocate its resources for performing other tasks. In such a case, the host processor must program the secondary processor to perform the task or tasks that are to be delegated. For example, if the host processor delegates the execution of a particular algorithm to the secondary processor, the host processor must program the secondary processor to execute the algorithm. It is time-consuming and energy-consuming for a host processor to have to program the secondary processor.
BRIEF SUMMARY Disclosed herein is a technique for delegating tasks between multiple processor cores. An illustrative embodiment comprises an electronic device comprising a first processor and a second processor, the second processor coupled to the first processor and adapted to receive an address from the first processor, to pause execution of a first thread at a switch point, and to use the address to retrieve and execute a group of instructions in a second thread. Prior to executing the group of instructions in the second thread, the second processor pushes onto a hardware-controlled stack data pertaining to the switch point, the data comprising information needed to resume execution of the first thread at the switch point.
Another illustrative embodiment comprises a processor that comprises decode logic adapted to receive from another processor an address of a group of instructions. The processor also comprises fetch logic coupled to the decode logic and adapted to fetch the group of instructions from storage. The decode logic pauses processing of a first thread at a switch point and processes the group of instructions in a separate thread. Prior to processing the group of instructions, the processor pushes onto a hardware-controlled stack data pertaining to the switch point, the data comprising contents of registers used by the group of instructions.
Yet another illustrative embodiment comprises a method of delegating a task from a first processor to a second processor. The method comprises transferring an address of a group of instructions from the first processor to the second processor, pausing execution of a first thread in the second processor at a switch point, pushing data onto a stack, the data comprising contents of registers used by the group of instructions. The method further comprises retrieving the group of instructions using the address, executing the group of instructions in a second thread, and popping the data off of the stack and storing the data to the registers in the second processor.
NOTATION AND NOMENCLATURE Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ”. Also, the term “couple” or “couples” is intended to mean either an indirect or direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect connection via other devices and connections.
BRIEF DESCRIPTION OF THE DRAWINGS For a more detailed description of the preferred embodiments of the present invention, reference will now be made to the accompanying drawings, wherein:
FIG. 1 shows a diagram of a system including a Java Stack Machine (“JSM”) and a Main Processor Unit (“MPU”), in accordance with preferred embodiments of the invention;
FIG. 2 shows a block diagram of the JSM of FIG. 1 in accordance with preferred embodiments of the invention;
FIG. 3 shows various registers used in the JSM of FIGS. 1 and 2, in accordance with embodiments of the invention;
FIG. 4 shows the preferred operation of the JSM to include “micro-sequences,” in accordance with embodiments of the invention;
FIG. 5 shows an illustrative switching process between two execution threads, in accordance with a preferred embodiment of the invention;
FIG. 6 shows an illustrative 32-bit instruction that may be incorporated into a micro-sequence, in accordance with a preferred embodiment of the invention;
FIG. 7 shows a flow diagram of the switching process of FIG. 5, in accordance with embodiments of the invention;
FIG. 8 shows a flow diagram describing a delegation technique in accordance with a preferred embodiment of the invention; and
FIG. 9 shows the system described herein, in accordance with preferred embodiments of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims, unless otherwise specified. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.
Described herein is a technique by which a host processor may delegate a task to a secondary processor by simply sending a command and an address to the secondary processor. The command causes the secondary processor to use the address to locate and retrieve a group of instructions that has been pre-programmed into the secondary processor. Executing this group of instructions causes the secondary processor to perform whatever task the host processor delegated to the secondary processor. However, before the secondary processor executes the group of instructions, it must first stop what it is doing in a currently executing thread and must further “bookmark” its place in the currently executing thread. By bookmarking its place in the currently executing thread, the secondary processor can execute the group of instructions and then resume executing in the thread at the bookmarked location. Accordingly, a technique for bookmarking a spot in a thread and a technique for delegating tasks from the host processor to the secondary processor are now discussed in turn.
In the context of software code, a “thread” may be defined as a single stream of code execution. While executing a software program, a processor may switch from a first thread to a second thread in order to complete a particular task. For example, the first thread may comprise some stimulus (i.e., instruction) that, when executed by the processor, causes the processor to halt execution of the first thread and to begin execution of the second thread. The second thread may comprise the performance of some task by a different portion of the software program.
The point in the first thread at which the switch is made may be termed the “switch point.” When switching from the first thread to the second thread, the processor first “bookmarks” the switch point, so that when the processor has finished executing the second thread of code, it can resume execution in the first thread at the switch point.
In order to bookmark the switch point, the processor stores all information that pertains to the switch point (known as the “context” of the switch point). Such information includes all registers, the program counter, the stack pointer, etc. The processor copies such information to memory and retrieves the information later to resume execution in the first thread at the switch point. Bookmarking the switch point is time-consuming and consumes power, which may be in limited supply in, for example, a battery-operated device such as a mobile phone.
Processors that store to memory all information pertaining to the switch point unnecessarily spend time and power doing so. Whereas the aforementioned processors store all registers, the program counter, stack pointer, etc., the subject matter described herein is achieved at least in part by the realization that in many cases, less than all of such information need be stored. For example, only three values need be saved to sufficiently bookmark the switch point: the program counter (PC), a second program counter called the micro-program counter (μPC), discussed below, and a status register. Once the processor has finished executing the second thread, these three values provide sufficient information for the processor to find the switch point in the first thread and resume execution at that switch point.
Accordingly, described herein is a programmable electronic device, such as a processor, that is able to bookmark a switch point using a minimal amount of information pertaining to the switch point. A “minimal” amount of information generally comprises information in one or more registers, but not all registers, of a processor core. For example, in some embodiments, a “minimal” amount of information comprises a PC register, a μPC register and a status register. In other embodiments, a “minimal” amount of information comprises the PC register, the μPC register and the status register, as well as one or more additional registers, but less than all registers. In still other embodiments, a “minimal” amount of information comprises less than all registers. In yet other embodiments, a “minimal” amount of information consists of only the information (i.e., registers) necessary to bookmark a switch point, where the amount of information (i.e., number of registers) varies depending on the processor used and/or the software application being processed. In such cases, the “minimal” amount of information may simply be one register or may be all of the registers in the processor core. Instead of storing all switch point information to memory, the processor described herein pushes a minimal amount of switch point information onto a processor stack. Later, when the processor needs the switch point information, it pops the information off of the stack and uses the information to resume execution at the switch point. In this way, the time and power demands placed on the processor are reduced or even minimized, resulting in increased performance.
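For illustration only, the following C sketch models the minimal-context bookmark described above: the PC, μPC and status register are pushed onto a small stack before a thread switch and popped back off to resume execution at the switch point. All type and function names (min_context_t, stack_push, etc.) are hypothetical and are not part of the disclosed hardware.

#include <stdint.h>

/* Hypothetical model of the minimal switch-point context described above.
 * Only the PC, the micro-PC and the status register are bookmarked. */
typedef struct {
    uint32_t pc;      /* program counter at the switch point       */
    uint32_t upc;     /* micro-program counter at the switch point */
    uint32_t status;  /* status register (e.g., R15) at the switch */
} min_context_t;

/* Hardware-controlled stack modeled as a simple array; sp indexes the
 * next free slot (names and layout are illustrative only). */
#define STACK_DEPTH 64
static uint32_t hw_stack[STACK_DEPTH];
static int sp = 0;

static void stack_push(uint32_t v) { hw_stack[sp++] = v; }
static uint32_t stack_pop(void)    { return hw_stack[--sp]; }

/* Bookmark the switch point: push the minimal context onto the stack. */
void min_context_push(const min_context_t *ctx)
{
    stack_push(ctx->pc);
    stack_push(ctx->upc);
    stack_push(ctx->status);
}

/* Resume at the switch point: pop the minimal context back off the stack
 * in reverse order so the registers can be restored. */
void min_context_pop(min_context_t *ctx)
{
    ctx->status = stack_pop();
    ctx->upc    = stack_pop();
    ctx->pc     = stack_pop();
}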
Some situations, however, require more than a minimal amount of information to be stored. For example, in these situations, a minimum amount of information may not be sufficient to properly bookmark a switch point. Accordingly, the disclosed processor is capable of bookmarking a switch point using a minimal amount of information (“minimal context store”) needed to resume execution at the switch point. The processor also is capable of bookmarking a switch point using more than a minimal amount of information (“full context store”), as described further below.
The processor described herein is particularly suited for executing Java™ Bytecodes or comparable code. As is well known, Java is particularly suited for embedded applications. Java is a stack-based language, meaning that a processor stack is heavily used when executing various instructions (e.g., Bytecodes), which instructions generally have a size of 8 bits. Java is a relatively “dense” language, meaning that on average each instruction may perform a large number of functions compared to various other instructions. The dense nature of Java is of particular benefit for portable, battery-operated devices that preferably include as little memory as possible to save space and power. The reason, however, for executing Java code is not material to this disclosure or the claims which follow. Further, the processor advantageously includes one or more features that permit the execution of the Java code to be accelerated.
Referring now to FIG. 1, a system 100 is shown in accordance with a preferred embodiment of the invention. As shown, the system includes at least two processors 102 and 104. Processor 102 is referred to for purposes of this disclosure as a Java Stack Machine (“JSM”) and processor 104 may be referred to as a Main Processor Unit (“MPU”). System 100 may also include memory 106 coupled to both the JSM 102 and MPU 104 and thus accessible by both processors. At least a portion of the memory 106 may be shared by both processors meaning that both processors may access the same shared memory locations. Further, if desired, a portion of the memory 106 may be designated as private to one processor or the other. System 100 also includes a Java Virtual Machine (“JVM”) 108, compiler 110, and a display 114. The MPU 104 preferably includes an interface to one or more input/output (“I/O”) devices such as a keypad to permit a user to control various aspects of the system 100. In addition, data streams may be received from the I/O space into the JSM 102 to be processed by the JSM 102. Other components (not specifically shown) may be included as desired for various applications.
As is generally well known, Java code comprises a plurality of “Bytecodes” 112. Bytecodes 112 may be provided to the JVM 108, compiled by compiler 110 and provided to the JSM 102 and/or MPU 104 for execution therein. In accordance with a preferred embodiment of the invention, the JSM 102 may execute at least some, and generally most, of the Java Bytecodes. When appropriate, however, the JSM 102 may request the MPU 104 to execute one or more Java Bytecodes not executed or executable by the JSM 102. In addition to executing Java Bytecodes, the MPU 104 also may execute non-Java instructions. The MPU 104 also hosts an operating system (“O/S”) (not specifically shown) which performs various functions including system memory management, the system task management that schedules the JVM 108 and most or all other native tasks running on the system, management of the display 114, receiving input from input devices, etc. Without limitation, Java code may be used to perform any one of a variety of applications including multimedia, games or web based applications in the system 100, while non-Java code, which may comprise the O/S and other native applications, may still run on the system on the MPU 104.
The JVM 108 generally comprises a combination of software and hardware. The software may include the compiler 110 and the hardware may include the JSM 102. The JVM may include a class loader, Bytecode verifier, garbage collector, and a Bytecode interpreter loop to interpret the Bytecodes that are not executed on the JSM processor 102.
In accordance with preferred embodiments of the invention, the JSM 102 may execute at least two types of instruction sets. One type of instruction set may comprise standard Java Bytecodes. As is well-known, Java is a stack-based programming language in which instructions generally target a stack. For example, an integer add (“IADD”) Java instruction pops two integers off the top of the stack, adds them together, and pushes the sum back on the stack. A “simple” Bytecode instruction is generally one in which the JSM 102 may perform an immediate operation either in a single cycle (e.g., an “iadd” instruction) or in several cycles (e.g., “dup2_x2”). A “complex” Bytecode instruction is one in which several memory accesses may be required to be made within the JVM data structure for various verifications (e.g., NULL pointer, array boundaries). As will be described in further detail below, one or more of the complex Bytecodes may be replaced by a “micro-sequence” comprising various other instructions.
Another type of instruction set executed by the JSM 102 may include instructions other than standard Java instructions. In accordance with at least some embodiments of the invention, the other instruction set may include register-based and memory-based operations to be performed. This other type of instruction set generally complements the Java instruction set and, accordingly, may be referred to as a complementary instruction set architecture (“C-ISA”). By complementary, it is meant that a complex Java Bytecode may be replaced by a “micro-sequence” comprising C-ISA instructions. The execution of Java may be made more efficient and run faster by replacing some sequences of Bytecodes by preferably shorter and more efficient sequences of C-ISA instructions. The two sets of instructions may be used in a complementary fashion to obtain satisfactory code density and efficiency. As such, the JSM 102 generally comprises a stack-based architecture for efficient and accelerated execution of Java Bytecodes combined with a register-based architecture for executing register and memory based C-ISA instructions. Both architectures preferably are tightly combined and integrated through the C-ISA. Because various of the data structures described herein are generally JVM-dependent and thus may change from one JVM implementation to another, the software flexibility of the micro-sequence provides a mechanism for various JVM optimizations now known or later developed.
FIG. 2 shows an exemplary block diagram of the JSM 102. As shown, the JSM includes a core 120 coupled to data storage 122 and instruction storage 130. The core may include one or more components as shown. Such components preferably include a plurality of registers 140, three address generation units (“AGUs”) 142, 147, micro-translation lookaside buffers (micro-TLBs) 144, 156, a multi-entry micro-stack 146, an arithmetic logic unit (“ALU”) 148, a multiplier 150, decode logic 152, and instruction fetch logic 154. In general, operands may be retrieved from data storage 122 or from the micro-stack 146 and processed by the ALU 148, while instructions may be fetched from instruction storage 130 by fetch logic 154 and decoded by decode logic 152. The address generation unit 142 may be used to calculate addresses based, at least in part, on data contained in the registers 140. The AGUs 142 may calculate addresses for C-ISA instructions. The AGUs 142 may support parallel data accesses for C-ISA instructions that perform array or other types of processing. The AGU 147 couples to the micro-stack 146 and may manage overflow and underflow conditions in the micro-stack preferably in parallel. The micro-TLBs 144, 156 generally perform the function of a cache for the address translation and memory protection information bits that are preferably under the control of the operating system running on the MPU 104. The decode logic 152 comprises auxiliary registers 151.
Referring now to FIG. 3, the registers 140 may include 16 registers designated as R0-R15. In some embodiments, registers R0-R5 and R8-R14 may be used as general purpose (“GP”) registers usable for any purpose by the programmer. Other registers, and some of the GP registers, may be used for specific functions. For example, in addition to use as a GP register, register R5 may be used to store the base address of a portion of memory in which Java local variables may be stored when used by the current Java method. The top of the micro-stack 146 can be referenced by the values in registers R6 and R7. The top of the micro-stack 146 has a matching address in external memory pointed to by register R6. The values contained in the micro-stack 146 are the latest updated values, while their corresponding values in external memory may or may not be up to date. Register R7 provides the data value stored at the top of the micro-stack 146. Register R15 may be used for status and control of the JSM 102. At least one bit (called the “Micro-Sequence-Active” bit) in status register R15 is used to indicate whether the JSM 102 is executing a simple instruction or a complex instruction through a micro-sequence. This bit controls, in particular, which program counter is used (PC or μPC) to fetch the next instruction, as will be explained below.
Referring again to FIG. 2, as noted above, the JSM 102 is adapted to process and execute instructions from at least two instruction sets, at least one having instructions from a stack-based instruction set (e.g., Java). The stack-based instruction set may include Java Bytecodes. Unless empty, Java Bytecodes may pop data from and push data onto the micro-stack 146. The micro-stack 146 preferably comprises the top n entries of a larger stack that is implemented in data storage 122. Although the value of n may vary in different embodiments, in accordance with at least some embodiments, the size n of the micro-stack may be the top eight entries in the larger, memory-based stack. The micro-stack 146 preferably comprises a plurality of gates in the core 120 of the JSM 102. By implementing the micro-stack 146 in gates (e.g., registers) in the core 120 of the processor 102, access to the data contained in the micro-stack 146 is generally very fast, although any particular access speed is not a limitation on this disclosure.
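The following C sketch is a rough software model of how a micro-stack holding the top n entries of a larger memory-based stack might behave, with spill on overflow and fill on underflow (functions the text attributes to AGU 147 in hardware). The names and the spill policy are illustrative assumptions, not the actual JSM implementation.

#include <stdint.h>

#define MICRO_STACK_DEPTH 8          /* top n entries kept in core gates */
#define MAIN_STACK_DEPTH  1024       /* larger stack in data storage 122 */

/* Illustrative model: the micro-stack holds the newest entries; on
 * overflow the oldest entry spills to the memory-based stack, and on
 * underflow an entry is filled back in from memory. */
typedef struct {
    uint32_t micro[MICRO_STACK_DEPTH];
    int      micro_count;
    uint32_t memory[MAIN_STACK_DEPTH];
    int      memory_count;
} jsm_stack_t;

void jsm_push(jsm_stack_t *s, uint32_t v)
{
    if (s->micro_count == MICRO_STACK_DEPTH) {
        /* spill the oldest micro-stack entry to memory */
        s->memory[s->memory_count++] = s->micro[0];
        for (int i = 1; i < MICRO_STACK_DEPTH; i++)
            s->micro[i - 1] = s->micro[i];
        s->micro_count--;
    }
    s->micro[s->micro_count++] = v;
}

/* Caller must not pop from an entirely empty stack in this sketch. */
uint32_t jsm_pop(jsm_stack_t *s)
{
    if (s->micro_count == 0 && s->memory_count > 0) {
        /* fill from the memory-based stack on underflow */
        s->micro[s->micro_count++] = s->memory[--s->memory_count];
    }
    return s->micro[--s->micro_count];
}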
The ALU 148 adds, subtracts, and shifts data. The multiplier 150 may be used to multiply two values together in one or more cycles. The instruction fetch logic 154 generally fetches instructions from instruction storage 130. The instructions may be decoded by decode logic 152. Because the JSM 102 is adapted to process instructions from at least two instruction sets, the decode logic 152 generally comprises at least two modes of operation, one mode for each instruction set. As such, the decode logic unit 152 may include a Java mode in which Java instructions may be decoded and a C-ISA mode in which C-ISA instructions may be decoded.
The data storage 122 generally comprises data cache (“D-cache”) 124 and data random access memory (“DRAM”) 126. Reference may be made to U.S. Pat. No. 6,826,652, filed Jun. 9, 2000 and U.S. Pat. No. 6,792,508, filed Jun. 9, 2000, both incorporated herein by reference. Reference also may be made to U.S. Ser. No. 09/932,794 (Publication No. 20020069332), filed Aug. 17, 2001 and incorporated herein by reference. The stack (excluding the micro-stack 146), arrays and non-critical data may be stored in the D-cache 124, while Java local variables, critical data and non-Java variables (e.g., C, C++) may be stored in D-RAM 126. The instruction storage 130 may comprise instruction RAM (“I-RAM”) 132 and instruction cache (“I-cache”) 134. The I-RAM 132 may be used for “complex” micro-sequenced Bytecodes or micro-sequences, as described below. The I-cache 134 may be used to store other types of Java Bytecode and mixed Java/C-ISA instructions.
As noted above, the C-ISA instructions generally complement the standard Java Bytecodes. For example, the compiler 110 may scan a series of Java Bytecodes 112 and replace a complex Bytecode with a micro-sequence as explained previously. The micro-sequence may be created to optimize the function(s) performed by the replaced complex Bytecodes.
FIG. 4 illustrates the operation of the JSM 102 to replace Java Bytecodes with micro-sequences. FIG. 4 shows some, but not necessarily all, components of the JSM. In particular, the instruction storage 130, the decode logic 152, and a micro-sequence vector table 162 are shown. The decode logic 152 receives instructions from the instruction storage 130 and accesses the micro-sequence vector table 162. In general and as described above, the decode logic 152 receives instructions (e.g., instructions 170) from instruction storage 130 via instruction fetch logic 154 (FIG. 2) and decodes the instructions to determine the type of instruction for subsequent processing and execution. In accordance with the preferred embodiments, the JSM 102 either executes the Bytecode from instructions 170 or replaces a Bytecode from instructions 170 with a micro-sequence as described below.
The micro-sequence vector table 162 may be implemented in the decode logic 152 or as separate logic in the JSM 102. The micro-sequence vector table 162 preferably includes a plurality of entries 164. The entries 164 may include one entry for each Bytecode that the JSM may receive. For example, if there are a total of 256 Bytecodes, the micro-sequence vector table 162 preferably comprises at least 256 entries. Each entry 164 preferably includes at least two fields: a field 166 and an associated field 168. Field 168 may comprise a single bit that indicates whether the instruction 170 is to be directly executed or whether the associated field 166 contains a reference to a micro-sequence. For example, a bit 168 having a value of “0” (“not set”) may indicate the field 166 is invalid and thus, the corresponding Bytecode from instructions 170 is directly executable by the JSM. Bit 168 having a value of “1” (“set”) may indicate that the associated field 166 contains a reference to a micro-sequence.
If the bit 168 indicates the associated field 166 includes a reference to a micro-sequence, the reference may comprise the full starting address in instruction storage 130 of the micro-sequence or a part of the starting address that can be concatenated with a base address that may be programmable in the JSM. In the former case, field 166 may provide as many address bits as are required to access the full memory space. In the latter case, a register within the JSM registers 140 is programmed to hold the base address and the vector table 162 may supply only the offset to access the start of the micro-sequence. Most or all JSM internal registers 140 and any other registers preferably are accessible by the main processor unit 104 and, therefore, may be modified by the JVM as necessary. Although not required, this latter addressing technique may be preferred to reduce the number of bits needed within field 166. At least a portion 180 of the instruction storage 130 may be allocated for storage of micro-sequences and thus the starting address may point to a location in micro-sequence storage 180 at which a particular micro-sequence can be found. The portion 180 may be implemented in I-RAM 132 shown above in FIG. 2.
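As a purely illustrative model of the lookup just described, the C sketch below indexes a 256-entry table with a Bytecode, tests the single-bit field 168, and, when the bit is set, forms the micro-sequence start address by combining the field 166 offset with a programmable base address. The structure layout and names are assumptions made for clarity, not the actual hardware format.

#include <stdint.h>
#include <stdbool.h>

#define NUM_BYTECODES 256

/* One vector-table entry: the bit (field 168) says whether the Bytecode is
 * replaced by a micro-sequence; the address field (field 166) holds the
 * (partial) start address of that micro-sequence. */
typedef struct {
    bool     use_useq;   /* "set" => replace Bytecode with a micro-sequence */
    uint16_t addr_field; /* offset (or full address) of the micro-sequence  */
} vector_entry_t;

static vector_entry_t vector_table[NUM_BYTECODES];
static uint32_t useq_base_addr;   /* programmable base, held in a JSM register */

/* Look up a Bytecode; returns true and the micro-sequence start address
 * if the Bytecode is micro-sequenced, false if it executes directly. */
bool lookup_micro_sequence(uint8_t bytecode, uint32_t *start_addr)
{
    const vector_entry_t *e = &vector_table[bytecode];
    if (!e->use_useq)
        return false;             /* directly executable Bytecode */
    /* offset form: concatenate the offset with the programmable base */
    *start_addr = useq_base_addr + e->addr_field;
    return true;
}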
Although the micro-sequence vector table 162 may be loaded and modified in accordance with a variety of techniques, the following discussion includes a preferred technique. The vector table 162 preferably comprises a JSM resource that is addressable via a register 140. A single entry 164 or a block of entries within the vector table 162 may be loaded by information from the data cache 124 (FIG. 2). When loading multiple entries (e.g., all of the entries 164) in the table 162, a repeat loop of instructions may be executed. Prior to executing the repeat loop, a register (e.g., R0) preferably is loaded with the starting address of the block of memory containing the data to load into the table. Another register (e.g., R1) preferably is loaded with the size of the block to load into the table. Register R14 is loaded with the value that corresponds to the first entry in the vector table that is to be updated/loaded.
The repeated instruction loop preferably comprises two instructions that are repeated n times. The value n preferably is the value stored in register R1. The first instruction in the loop preferably performs a load from the start address of the block (R0) to the first entry in the vector table 162. The second instruction in the loop preferably adds an “immediate” value to the block start address. The immediate value may be “2” if each entry in the vector table is 16 bits wide. The loop repeats itself to load the desired portion of the table, depending on the starting address.
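A C approximation of that repeat loop is shown below, with R0, R1 and R14 modeled as ordinary parameters and each vector-table entry assumed to be 16 bits wide (hence the immediate step of 2). It is a sketch of the described behavior, not the actual C-ISA instruction sequence.

#include <stdint.h>

/* Illustrative model of the repeat loop described above.  R0 holds the
 * start address of the data block, R1 the number of entries to load, and
 * R14 the index of the first vector-table entry to update. */
void load_vector_table(uint16_t *vector_table,
                       const uint8_t *block_start,   /* R0  */
                       uint32_t       num_entries,   /* R1  */
                       uint32_t       first_entry)   /* R14 */
{
    const uint8_t *src = block_start;
    for (uint32_t i = 0; i < num_entries; i++) {
        /* first instruction: load one 16-bit entry from the block */
        vector_table[first_entry + i] = (uint16_t)(src[0] | (src[1] << 8));
        /* second instruction: add the immediate value 2 to the address */
        src += 2;
    }
}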
In operation, the decode logic 152 uses a Bytecode from instructions 170 as an index into micro-sequence vector table 162. Once the decode logic 152 locates the indexed entry 164, the decode logic 152 examines the associated bit 168 to determine whether the Bytecode is to be replaced by a micro-sequence. If the bit 168 indicates that the Bytecode can be directly processed and executed by the JSM, then the instruction is so executed. If, however, the bit 168 indicates that the Bytecode is to be replaced by a micro-sequence, then the decode logic 152 preferably changes this instruction into a “no operation” (NOP) and sets the micro-sequence-active bit (described above) in the status register R15. In another embodiment, the JSM's pipe may be stalled to fetch and replace this micro-sequenced instruction by the first instruction of the micro-sequence. Changing the micro-sequenced Bytecode into a NOP while fetching the first instruction of the micro-sequence permits the JSM to process multi-cycle instructions that are further advanced in the pipe without additional latency. The micro-sequence-active bit may be set at any suitable time such as when the micro-sequence enters the JSM execution stage (not specifically shown).
As described above, the JSM 102 implements two program counters: the PC and the μPC. The PC and the μPC are stored in auxiliary registers 151, which in turn are stored in the decode logic 152. In accordance with a preferred embodiment, one of these two program counters is the active program counter used to fetch and decode instructions. The PC 186 may be the currently active program counter when the decode logic 152 encounters a Bytecode to be replaced by a micro-sequence. Setting the status register's micro-sequence-active bit causes the micro-program counter 188 to become the active program counter instead of the program counter 186. Also, the contents of the field 166 associated with the micro-sequenced Bytecode preferably are loaded into the μPC 188. At this point, the JSM 102 is ready to begin fetching and decoding the instructions comprising the micro-sequence. At or about the time the decode logic begins using the μPC 188, the PC 186 preferably is incremented by a suitable value to point the PC 186 to the next instruction following the Bytecode that is replaced by the micro-sequence. In at least some embodiments, the micro-sequence-active bit within the status register R15 may only be changed when the first instruction of the micro-sequence enters the execute phase of the JSM 102 pipe. The switch from the PC 186 to the μPC 188 preferably is effective immediately after the micro-sequenced instruction is decoded, thereby reducing the latency.
The micro-sequence may end with a predetermined value or Bytecode from the C-ISA called “RtuS” (return from micro-sequence) that indicates the end of the sequence. This C-ISA instruction causes a switch from the μPC 188 to the PC 186 upon completion of the micro-sequence. Preferably, the PC 186 previously was incremented, as discussed above, so that the value of the PC 186 points to the next instruction to be decoded. The instruction may have a delayed effect or an immediate effect depending on the embodiment that is implemented. In embodiments with an immediate effect, the switch from the μPC 188 to the PC 186 is performed immediately after the instruction is decoded and the instruction after the RtuS instruction is the instruction pointed to by the address present in the PC 186.
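The following C sketch ties these steps together: a micro-sequenced Bytecode is replaced by a NOP, the micro-sequence-active bit makes the μPC the active program counter, the PC is advanced past the replaced Bytecode, and an RtuS-like routine clears the bit again so fetching resumes from the PC. Bit positions, opcode values and function names are illustrative assumptions only.

#include <stdint.h>
#include <stdbool.h>

#define MSA_BIT (1u << 0)            /* micro-sequence-active bit in R15 (assumed position) */

typedef struct {
    uint32_t pc, upc, status;        /* PC 186, uPC 188, status register R15 */
} jsm_regs_t;

/* vector-table lookup from the earlier sketch */
extern bool lookup_micro_sequence(uint8_t bytecode, uint32_t *start_addr);

/* Decode one Bytecode; returns the instruction actually issued. */
uint8_t decode(jsm_regs_t *r, uint8_t bytecode, uint32_t bytecode_len)
{
    uint32_t useq_addr;
    if (lookup_micro_sequence(bytecode, &useq_addr)) {
        r->status |= MSA_BIT;        /* uPC becomes the active counter      */
        r->upc     = useq_addr;      /* point the uPC at the micro-sequence */
        r->pc     += bytecode_len;   /* PC now points past the Bytecode     */
        return 0x00;                 /* replaced instruction becomes a NOP  */
    }
    return bytecode;                 /* directly executable Bytecode        */
}

/* RtuS: end of the micro-sequence; clear the bit so the (already
 * incremented) PC is used to fetch the next instruction. */
void rtus(jsm_regs_t *r)
{
    r->status &= ~MSA_BIT;
}

/* The active fetch address is selected by the micro-sequence-active bit. */
uint32_t active_fetch_address(const jsm_regs_t *r)
{
    return (r->status & MSA_BIT) ? r->upc : r->pc;
}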
As discussed above, one or more Bytecodes may be replaced with a micro-sequence or a group of other instructions. Such replacement instructions may comprise any suitable instructions for the particular application and situation at hand. At least some such suitable instructions are disclosed in U.S. Ser. No. 10/631,308 (Publication No. 20040024989), filed Jul. 31, 2003 and incorporated herein by reference.
Replacement micro-sequence instructions also may be used to bookmark switch points when switching code execution threads. Referring to FIG. 5, the line marked “T1” denotes a first thread T1 that is processed by the JSM 102. The thread T1 comprises a plurality of Bytecode instructions, a plurality of micro-sequence instructions, or some combination thereof. As previously explained, the instructions that are executed in thread T1 are retrieved from the instruction storage 130. More specifically, Bytecodes are retrieved from the Bytecode storage 170 and micro-sequence instructions are retrieved from micro-sequence storage 180.
While processing thread T1, the decode logic 152 may encounter a sequence of JSM instructions that causes the processing of thread T1 to be paused and the processing of a separate thread T2 to be initialized. This sequence is executed in thread T1 at or immediately prior to switch point 502. Execution of this sequence causes processing of thread T1 to stop, and processing of a separate thread T2 (denoted by line “T2”) to begin in order to perform some separate task in thread T2. In some embodiments, instead of comprising a sequence of instructions (hereinafter referred to as “switch instructions”) that explicitly performs a thread switch, thread T1 may comprise a sequence of instructions that calls an operating system (OS) call (e.g., threadyield( )), which OS call selects one of a plurality of threads to execute based on thread priorities as dictated by the OS. A thread switch also may be directly initialized by the OS. Specifically, if the OS is running on the MPU 104, the OS may use a sequence of MPU commands to initialize the thread switch.
Before the JSM 102 switches from processing thread T1 to processing thread T2, however, information pertaining to the switch point 502 (i.e., “context” information) is stored by being pushed onto a T1 stack 123 (e.g., a memory-based stack designated specifically for thread T1 and stored in storage 122, FIG. 2) of the JSM 102. In some embodiments, the context information may be pushed onto the micro-stack 146. The use of the term “hardware-controlled stack” below and/or in the claims may refer to the micro-stack 146, the T1 stack 123 or the T2 stack 125, which T1 stack 123 and T2 stack 125 may be used as a micro-stack (e.g., like micro-stack 146). Although the embodiments below are discussed in terms of the T1 stack 123 and/or the T2 stack 125, the scope of disclosure is not limited to the use of these particular stacks and other stacks (e.g., micro-stack 146) may be substituted for the T1 stack 123 and/or the T2 stack 125. Further, in preferred embodiments, the context information is a minimal amount of information, as described below.
Context information that is collected preferably comprises the values of the PC 186, μPC 188 and status register (register R15) as they are at the switch point 502. When the decode logic 152 encounters a sequence of switch instructions while processing thread T1, the sequence causes the execution of thread T1 to be halted at switch point 502, the context of switch point 502 to be saved, and the execution of thread T2 to be initialized. In some embodiments, commands sent from the MPU 104 may perform a function similar to that of a sequence of switch instructions.
Regardless of whether a switch from thread T1 to thread T2 is initialized by code in thread T1 or commands received from the MPU 104, the switching processes are similar. As described above, the execution of thread T1 is first halted. Once the JSM 102 has stopped processing thread T1, the JSM 102 stores the context of the switch point 502. The context of the switch point 502 preferably comprises the minimum amount of information necessary for the JSM 102 to resume processing thread T1 at switch point 502 after the JSM 102 has finished processing thread T2. The JSM 102 stores the context of the switch point 502 by retrieving the PC 186 and the μPC 188 from the auxiliary registers 151 and pushing them onto the T1 stack 123. The JSM 102 also retrieves the value of the status register R15 and pushes that value onto the T1 stack 123 as well. These three values (the PC 186, the μPC 188 and the status register R15) together comprise the minimum amount of information needed for the JSM 102 to resume processing thread T1 at switch point 502 after processing thread T2.
However, in some embodiments, it is preferable to also store a fourth value for efficiency purposes. Accordingly, the JSM 102 pushes a fourth value onto the T1 stack 123, where the fourth value is variable. For example, the fourth value may be one of the registers 140. The scope of disclosure is not limited to pushing the PC 186, μPC 188, status register and variable register onto the stack in any particular order, nor is the scope of disclosure limited to pushing these particular values onto the stack. As described above, any suitable number of values (e.g., a minimum amount of information) may be pushed onto the stack to store a context.
In some embodiments, the switch instructions in the thread T1 may be 32-bit instructions that, when executed, call a subroutine or some other portion of code comprising instructions that store the context of the switch point 502 by pushing context values (e.g., PC, μPC, status register) onto the T1 stack 123. FIG. 6 shows an illustrative embodiment of such 32-bit instructions. Specifically, FIG. 6 shows a 32-bit instruction 599 that comprises information that describes the class of the 32-bit instruction 599 and further specifies the type of the instruction 599. For example, as shown in the figure, bits 31:28 describe the class of the instruction and bits 27:24 and bits 3:0 describe the particular type of instruction being used. Bits 27:24 and bits 3:0 may specify, for example, that the instruction is a minimum-context push instruction which, when executed, causes various context values to be pushed onto the stack, as described above. Bits 23:2 are not of significance and preferably do not contain arguments or other relevant data. Instead, bits 23:2 may contain placeholder values (e.g., “0” bits). The scope of disclosure is not limited to the use of instructions as shown in FIG. 6. Context values also may be pushed onto the T1 stack 123 by commands received from the MPU 104.
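Assuming the field layout of FIG. 6 (class in bits 31:28, type in bits 27:24 and 3:0, and placeholder zeros in the middle bits), a hypothetical encoder and decoder for such a 32-bit instruction might look like the C sketch below. The particular class and type values are not specified here, so the fields are left as parameters.

#include <stdint.h>

/* Illustrative helpers for the 32-bit instruction 599 of FIG. 6.
 * Field names and values are assumptions made for illustration. */
static inline uint32_t encode_ctx_insn(uint8_t insn_class,  /* bits 31:28 */
                                       uint8_t type_hi,     /* bits 27:24 */
                                       uint8_t type_lo)     /* bits 3:0   */
{
    return ((uint32_t)(insn_class & 0xF) << 28) |
           ((uint32_t)(type_hi    & 0xF) << 24) |
           ((uint32_t)(type_lo    & 0xF));       /* middle bits stay zero */
}

static inline uint8_t insn_class(uint32_t insn)   { return (insn >> 28) & 0xF; }
static inline uint8_t insn_type_hi(uint32_t insn) { return (insn >> 24) & 0xF; }
static inline uint8_t insn_type_lo(uint32_t insn) { return insn & 0xF; }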
Each thread has its own RAM base address for storing local variables used by that thread. The micro-sequence may contain instructions that, when executed, cause the JSM 102 to clean and invalidate the DRAM 126 to save the local variables being used by thread T1. More specifically, at least some of the contents of the DRAM 126 preferably are transferred to other areas of the storage 122, such as another DRAM (not specifically shown) that may be located in the storage 122. The DRAM 126 then is invalidated to clear space in the DRAM 126 for local variables that are used by thread T2. After the DRAM 126 has been cleaned and invalidated, the JSM 102 also may push the RAM base address onto the main stack, so that the local variables used by thread T1 may be retrieved for later use. Also, because each thread pushes and pops different values onto the micro-stack 146, the JSM 102 may further clean and invalidate the micro-stack 146 in order to preserve the entries of the micro-stack 146 and to clear the micro-stack 146 for use by thread T2. In at least some embodiments, the entries of the micro-stack 146 may be copied and/or transferred to the data cache 124. Further, in some embodiments, the JSM 102 may invalidate the current entries of the micro-stack 146, so that after a thread switch, the entries loaded into the micro-stack 146 replace the invalidated entries.
After the PC 186, the μPC 188, the status register R15 and an optional fourth register have been pushed onto the T1 stack 123, the JSM 102 stores the stack pointer (i.e., register R6). The stack pointer may be defined as the address of the topmost entry on the T1 stack 123 and may be stored in any suitable memory (e.g., storage 122). Once at least the PC 186, μPC 188, and the status register have been pushed onto the T1 stack 123, and once the stack pointer for the T1 stack 123 has been stored in memory, the context of switch point 502 has been stored.
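For illustration, the C sketch below walks through the save sequence in the order just described: push the minimal context (plus an optional fourth value) onto the thread's own stack, clean and invalidate the local-variable RAM, push the RAM base address, clean and invalidate the micro-stack, and finally record the stack pointer (R6) in memory. The thread_state_t structure and the clean/invalidate helpers are hypothetical stand-ins for the hardware operations.

#include <stdint.h>

typedef struct {
    uint32_t *stack;          /* memory-based per-thread stack (e.g., T1 stack) */
    uint32_t  sp;             /* stack pointer, mirrors register R6             */
    uint32_t  saved_sp;       /* copy kept in memory across the switch          */
    uint32_t  ram_base;       /* RAM base address for this thread's locals      */
} thread_state_t;

extern void clean_and_invalidate_dram(void);        /* model only */
extern void clean_and_invalidate_micro_stack(void); /* model only */

void save_context(thread_state_t *t,
                  uint32_t pc, uint32_t upc, uint32_t status,
                  uint32_t fourth_reg /* optional extra register value */)
{
    /* 1. push the minimal context (plus the optional fourth value) */
    t->stack[t->sp++] = pc;
    t->stack[t->sp++] = upc;
    t->stack[t->sp++] = status;
    t->stack[t->sp++] = fourth_reg;

    /* 2. preserve the local variables, then make room for the next thread */
    clean_and_invalidate_dram();
    t->stack[t->sp++] = t->ram_base;      /* push the RAM base address */
    clean_and_invalidate_micro_stack();

    /* 3. remember where this context lives (the value of R6) */
    t->saved_sp = t->sp;
}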
Because the context has been stored, the JSM 102 is ready to switch from thread T1 to thread T2. Similar to thread T1, thread T2 comprises a plurality of instructions (e.g., Bytecodes, micro-sequences or a combination thereof). Like thread T1, thread T2 may be executed multiple times. However, each time processing switches from thread T1 to thread T2, as the context of thread T1 is stored from the JSM 102 onto a stack, so should the context of thread T2 be loaded from a stack onto the JSM 102. The context of thread T2 may be found on top of the T2 stack 125. The T2 stack 125 preferably is a memory-based stack, specifically designated for thread T2 and stored in the storage 122. The thread T2 context may have been pushed onto the T2 stack 125 at the end of a previous iteration in a substantially similar fashion to the context-saving process described above in relation to thread T1, or, alternatively, the thread T2 context may have been pushed onto the T2 stack 125 during the creation of the thread T2. It also may have occurred during the last thread switch of thread T2.
Thus, to begin processing thread T2, the JSM 102 loads the stack pointer for T2 stack 125 from the storage 122 to register R6. The RAM base address is loaded from the T2 stack 125, thus loading the local variables for thread T2. The JSM 102 also loads the context of thread T2 from the T2 stack 125 onto the auxiliary registers 151 and/or the registers 140. In particular, the JSM 102 uses specific instructions to pop context values off of the T2 stack 125, where at least some of the specific instructions are indivisible. For example, a MCTXPOP instruction may be used to pop minimum context values off of the T2 stack 125. This MCTXPOP instruction, in at least some embodiments, is indivisible, mandatory for performing a context switch, and should not be preempted. In this way, the JSM 102 is initialized to the context of the previous iteration of thread T2. Thus, the JSM 102 effectively is able to resume processing where it “left off.” The JSM 102 decodes and executes thread T2 in a similar fashion to thread T1.
After thread T2 has been executed, the JSM 102 may resume processing thread T1 at switch point 502. To resume processing thread T1 at switch point 502, the JSM 102 loads the context information of thread T1 from the T1 stack 123. The JSM 102 loads the stack pointer of thread T1 from the storage 122 and into register R6. The JSM 102 then pops the RAM base address off of the T1 stack 123 and uses the RAM base address to load the local variables for thread T1. The JSM 102 also pops the status value, μPC 188 and the PC 186 off of the T1 stack 123. The JSM 102 stores the status value to the register R15 and stores the μPC 188 and the PC 186 to the auxiliary registers 151. In this way, the context information that is stored on top of the T1 stack 123 is popped off the stack and is used by the JSM 102 to return to the context of switch point 502. The JSM 102 may now resume processing thread T1 at switch point 502. The thread switch from thread T2 to thread T1 may be controlled by a sequence of code being executed in thread T2 or, alternatively, by commands sent from the MPU 104. The thread switching technique described above may be applied to any suitable pair of threads in the system 100.
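Continuing the hypothetical thread_state_t sketch above, the restore path pops the values back in the reverse order: reload the saved stack pointer, pop the RAM base address, then pop the context registers (the step the text performs with the indivisible MCTXPOP instruction). Again, this is only a software model of the described behavior, not the device itself.

/* Counterpart of save_context(): restore a thread from its own stack. */
void restore_context(thread_state_t *t,
                     uint32_t *pc, uint32_t *upc, uint32_t *status,
                     uint32_t *fourth_reg)
{
    t->sp = t->saved_sp;                  /* reload R6 from memory           */

    t->ram_base = t->stack[--t->sp];      /* pop RAM base, reload the locals */

    /* pop the minimal context (MCTXPOP-like, shown here as plain pops) */
    *fourth_reg = t->stack[--t->sp];
    *status     = t->stack[--t->sp];
    *upc        = t->stack[--t->sp];
    *pc         = t->stack[--t->sp];
}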
FIG. 7 shows a flowchart summarizing the process used to switch from one thread to another thread. The process 600 may begin by processing thread T1 (block 602). The process 600 comprises monitoring for a sequence of code in thread T1, or commands from the MPU 104, that initialize a thread switch from thread T1 to thread T2 (block 604). If no such sequence is encountered or no such command is received from the MPU 104, the process 600 comprises continuing to process thread T1 (block 602). However, if such a sequence or MPU 104 command is encountered, then the process 600 comprises halting processing of thread T1 (block 606) and pushing either the full or minimum context to the T1 stack (block 608), as previously described.
The process 600 further comprises cleaning and invalidating the RAM (block 610), pushing the RAM base address onto the T1 stack (block 612), cleaning and invalidating the micro-stack (block 614), and storing the T1 stack pointer to any suitable memory (block 616). The context of thread T1 has now been saved. Before beginning to process thread T2, the context of thread T2 (if any) is to be loaded from the T2 stack. Specifically, the process 600 comprises loading the T2 stack pointer from memory (block 618), popping the RAM base address from the T2 stack (block 620), popping the full or minimum context from the T2 stack (block 622), and subsequently beginning processing of the thread T2 (block 624).
In the embodiments described above, a minimum context (i.e., PC, μPC, status register) is pushed onto a stack to bookmark a switch point. While storing the minimum-context is faster than storing the full-context (i.e., all registers in the JSM core), and storing the minimum context onto the stack is faster than moving registers from the JSM 102 to the D-RAM 126, in some embodiments, it may be desirable to perform a full-context store instead of a minimum-context store, for reasons previously described. Thus, in such embodiments, full contexts also may be stored and/or loaded, in which case most or all of the registers 140 as well as most or all of the auxiliary registers 151 are stored and/or loaded with each thread switch. For instance, in cases where one or more register values other than the PC 186, μPC 188, and status register are affected in a second thread, it may be desirable to store all register values via a full-context store. In such cases, the 32-bit instructions described above and shown in FIG. 6 may comprise data (e.g., in bits 31:24 and 3:0) that causes a full-context store to be performed. Similarly, a 32-bit instruction may comprise data that causes a full-context load to be performed. Further, as also described above, a full-context store and/or load may be initialized by a command from the MPU 104 instead of by code being executed in thread T1. A full-context store and/or load is performed in a similar manner to a minimum-context store and/or load, with the exception being a difference in the number of registers stored and/or loaded.
As explained above, the technique of storing contexts during thread switches may be used to service commands received by the JSM 102 from the MPU 104. For example, in performing a series of tasks, the MPU 104 may determine that delegating one or more tasks to the JSM 102 would be expeditious, so that the MPU 104 may allocate its resources to performing other tasks. In such a case, the MPU 104 sends a command to the JSM 102, instructing the JSM 102 to perform a particular task. The command is coupled with a parameter, which parameter preferably comprises the address of a micro-sequence. The JSM 102, upon receiving the command and the associated parameter, stores the parameter in a suitable storage unit, such as a register 140, an auxiliary register 151 or on any one of the stacks in the JSM 102. The JSM 102 then uses the parameter (i.e., the micro-sequence address) to locate the micro-sequence in the micro-sequence storage 180. Upon locating the micro-sequence, the JSM 102 retrieves the micro-sequence and executes the micro-sequence, thus obeying the command sent from the MPU 104. The micro-sequence preferably is pre-programmed into the micro-sequence storage 180.
In obeying the command from the MPU 104, the JSM 102 may be required to pause whatever task it is completing at the moment the command is received from the MPU 104. More specifically, the JSM 102 may be performing a particular task or executing a sequence of code in a first thread T1 when it is interrupted with the command from the MPU 104. In order for the JSM 102 to service the command, it must first pause the execution of the first thread T1 at a switch point and bookmark the switch point by storing the context of the first thread T1. The JSM 102 then may service the command from the MPU 104 in a second thread T2. Once the command from the MPU 104 has been serviced, the JSM 102 may resume execution at the switch point in the first thread T1 by retrieving the stored context of the first thread T1.
The JSM 102 stores contexts and retrieves contexts in a manner similar to that previously described. In particular, when storing the context, the JSM 102 stores either a full context or, preferably, a minimum context. When storing a full context, the JSM 102 pushes all available registers 140 (and optionally auxiliary registers 151) onto the T1 stack 123. When storing a minimum context, the JSM 102 pushes the PC 186, the μPC 188, the status register R15 and optionally a fourth register value onto the T1 stack 123. In either case, before shifting to the second thread T2, the JSM 102 also stores the value of the stack pointer (i.e., register R6) in any suitable memory (e.g., DRAM 126). As previously described, the JSM 102 stores the value of the stack pointer so that, when it is ready to resume executing thread T1, the JSM 102 is able to locate the context information that is on the T1 stack 123. Specifically, once the JSM 102 has serviced the command from the MPU 104 and is ready to resume executing thread T1 at the switch point, the JSM 102 uses the stack pointer to locate the context information on the T1 stack 123. Once the context information is located, the JSM 102 pops the context information off of the stack and stores the context information to the appropriate registers (e.g., registers 140 and/or auxiliary registers 151) in the JSM 102. The JSM 102 then may resume executing in thread T1.
This technique is summarized in FIG. 8. The process 800 shown in FIG. 8 begins with the MPU 104 determining to delegate a task to the JSM 102 (block 802). Accordingly, the MPU 104 delegates the task by sending to the JSM 102 a command along with a parameter (block 804). The command instructs the JSM 102 to use the parameter (i.e., an address of a micro-sequence) to find a corresponding micro-sequence and to process the micro-sequence. Once the JSM 102 receives the command from the MPU 104, the JSM 102 pauses processing of a current thread T1 at a switch point (block 806). The JSM 102 then stores the context of the thread T1 at the switch point (block 808). The JSM 102 then uses the parameter received from the MPU 104 to find the micro-sequence (block 810). As explained above, the parameter contains the address of this micro-sequence. The JSM 102 subsequently retrieves and executes the micro-sequence in a thread T2 (block 812). After executing the micro-sequence, the JSM 102 restores the context of the switch point in thread T1 (block 814). Finally, the JSM 102 may resume executing thread T1 at the switch point (block 816). Such a technique is not limited to commands received from the MPU 104. Instead, the JSM 102 may apply this technique to any task delegated to the JSM 102, such as an interrupt, exception routine, etc.
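The end-to-end delegation flow of FIG. 8 can be summarized in the following hypothetical C sketch, in which the MPU's command carries the micro-sequence address and the JSM pauses thread T1, bookmarks the switch point, services the command in thread T2, and then resumes thread T1. Every function and type name here is an assumption for illustration; none is taken from the actual device.

#include <stdint.h>

typedef struct {
    uint32_t command;
    uint32_t useq_addr;   /* parameter: address of the pre-programmed micro-sequence */
} mpu_message_t;

extern void pause_current_thread(void);
extern void save_context_t1(void);
extern void execute_micro_sequence(uint32_t addr);  /* runs in thread T2 */
extern void restore_context_t1(void);
extern void resume_thread_t1(void);

void jsm_handle_delegation(const mpu_message_t *msg)
{
    pause_current_thread();                 /* block 806: pause T1 at the switch point */
    save_context_t1();                      /* block 808: bookmark the switch point    */
    execute_micro_sequence(msg->useq_addr); /* blocks 810-812: find and run it in T2   */
    restore_context_t1();                   /* block 814: restore T1's context         */
    resume_thread_t1();                     /* block 816: resume T1 at the switch point */
}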
System 100 may be implemented as a mobile cell phone 415 such as that shown in FIG. 9. As shown, the battery-operated, mobile communication device includes an integrated keypad 412 and display 414. The JSM processor 102 and MPU processor 104 and other components may be included in electronics package 410 connected to the keypad 412, display 414, and radio frequency (“RF”) circuitry 416. The RF circuitry 416 may be connected to an antenna 418.
Although the above embodiments have been described in the context of dual processor cores, the techniques described herein also are applicable to any number of processor cores. For example, the system 100 may comprise the MPU 104, the JSM 102, as well as at least one additional processor core. The host processor (i.e., the MPU 104) may delegate tasks to the JSM 102 as well as any of the additional processor cores, using the techniques described above.
While the preferred embodiments of the present invention have been shown and described, modifications thereof can be made by one skilled in the art without departing from the spirit and teachings of the invention. The embodiments described herein are exemplary only, and are not intended to be limiting. Many variations and modifications of the invention disclosed herein are possible and are within the scope of the invention. Accordingly, the scope of protection is not limited by the description set out above. Each and every claim is incorporated into the specification as an embodiment of the present invention.