Principles of a Multitasking Operating System on the 8051

I went back and forth for a long time over whether to write this article. In the end, since so many people are interested in operating systems, I decided to write it. What follows may not be jade, but consider it a brick thrown out to attract some.

Many people, myself included, have been pessimistic about running an operating system on the 51, because it has so few on-chip resources. But for the many systems whose demands are modest, an operating system can make the code more intuitive and easier to maintain, so there is still room for an OS to survive on the 51.

Popular systems such as uC/OS and Tiny51 are not suitable for chips like the 2051: they take up too many resources. The only way to get a usable operating system onto the 51 is to roll your own, tailored to the job at hand. The purpose of this post is to teach you how to write an OS on the spot, not to hand you a finished OS. All the code provided is sample code, so don't dismiss it as junk just because it has no features; if I wrote every feature in, you probably wouldn't want to read it, and it would lose its flexibility and with that its value.

An example is posted below. As you can see, the OS itself is under 10 lines of source code, the compiled object code is 60 bytes, and a task switch costs 20 machine cycles. By comparison, the Tiny51 bundled with KEIL has about 800 bytes of object code, and a switch costs 100 to 700 cycles. The only downside is that each task occupies a stack of a dozen or so bytes, so the number of tasks cannot be large. That makes it a tight fit on a 51 with 128 bytes of RAM, but it is no great problem on a 52. This code was measured on an STC12C4052 running at 36 MHz; a task switch takes only about 2 µs.

#include <reg51.h> //8051 SFR definitions (SP and friends)

#define MAX_TASKS 2 //Number of task slots. Must equal the actual number of tasks.
#define MAX_TASK_DEP 12 //Maximum stack depth. Never less than 2; 12 is a conservative value.

unsigned char idata task_stack[MAX_TASKS][MAX_TASK_DEP]; //Task private stacks.
unsigned char idata task_sp[MAX_TASKS]; //Saved stack pointer of each task.
unsigned char task_id; //Number of the currently active task.

//Task switching function (task scheduler)
void task_switch()
{
    task_sp[task_id] = SP; //Save the current task's stack pointer
    if (++task_id == MAX_TASKS) //Advance to the next task number
        task_id = 0;
    SP = task_sp[task_id]; //Point the stack pointer at the next task's private stack
}

//Task load function. Loads the given function (parameter 1) into the given task slot (parameter 2).
//If the slot already holds a task, the old task is lost, but the system itself does not fail.
void task_load(unsigned int fn, unsigned char tid)
{
    task_sp[tid] = (unsigned char)(task_stack[tid] + 1);
    task_stack[tid][0] = (unsigned char)(fn & 0xff); //Entry address, low byte
    task_stack[tid][1] = (unsigned char)(fn >> 8);   //Entry address, high byte
}

//Start task scheduling from the given task. Once called, this macro never returns.
#define os_start(tid) {task_id = tid; SP = task_sp[tid]; return;}

/*====================== Test code below ======================*/
void task1()
{
    static unsigned char i;
    while (1) {
        i++;
        task_switch(); //After compiling, put a breakpoint here
    }
}

void task2()
{
    static unsigned char j;
    while (1) {
        j += 2;
        task_switch(); //After compiling, put a breakpoint here
    }
}

void main()
{
    //Two tasks are loaded here, so MAX_TASKS must also be defined as 2
    task_load((unsigned int)task1, 0); //Load task1 into slot 0
    task_load((unsigned int)task2, 1); //Load task2 into slot 1
    os_start(0);
}

Such a simple multitasking system can't be called a real operating system, but once you understand its principle you can easily extend it into something very powerful. Want to know how? Read on.

1. What is an operating system?

The human brain takes to analogies more readily than to abstractions, so I will use a "transit system" as an analogy for an "operating system".

When we want to solve a problem, we complete it by some means of handling it. In daily life we call this a "method"; in a computer it is called a "program" (and sometimes an "algorithm").

Take travel as an example. To get from A to B we can walk or fly, go straight or take a detour; anything that gets us from A to B counts as a method. The need to get from A to B corresponds to a "task" in the computer, and a method that actually gets us there is the "task's processing flow".

Obviously, not all methods are reasonable: some only a fool would adopt, and some only a fool would pass up. In computer terms, some task flows are good and some are poor.

Summing the methods up, they fall into several kinds:

Some are faster, suiting people in a hurry; some are more comfortable, suiting the lazy; some are cheaper, suiting the poor.

In computer terms: some save CPU time, some have a simple flow, and some make low demands on system resources.

Now a problem appears:

If all the resources in the world served you alone (one task monopolizing all resources), then whatever method best meets your needs is a good method. In reality, though, many people travel; say 10 people (10 tasks) but only 1 car (1 set of resources). This is called "resource contention".

If everyone insisted on the method that suits him best, the driver would have to chauffeur them one at a time, and at any moment there would be only one passenger in the car. This is "sequential execution", and you can see it wastes system resources terribly.

Since we lack the magic to turn one car into ten and carry all 10 people at once, we must invent mechanisms and conventions that make one car behave like ten. The solution everyone knows: lay out bus routes.

The simplest way is to string together all the passengers' start and end points. The car runs along this line and the passengers decide where to get on and off. This is the most primitive bus route; it is very crude, but at least it settles the fight over the car. In the computer, it corresponds to mixing the code of all the tasks together.

That is neither elegant nor efficient, so the driver thinks of something better: gather the passengers, list all their start and end points, count how often each leg is used, and then design the routes. Legs that can be merged become one route; those that cannot get routes of their own. This is "task definition". Furthermore, busier routes get more buses and priority in scheduling; this is "task priority".

After this arrangement there is still only one car, yet its carrying capacity has multiplied. This set of schedules and routes is a "transit system". See it now? An operating system is just this kind of convention.

Let's go back and line up the correspondences:

Car - system resources. Mainly the CPU, but also memory, timers, interrupt sources and so on.
Passenger's trip - task
Bus route - processing flow
Delivering passengers one by one - sequential execution
Carrying all passengers at the same time - parallel multitasking
Designing routes by usage frequency and running the busy ones first - task priority

A computer holds all kinds of resources. On the hardware side there are the CPU, memory, timers, interrupt sources, I/O ports and so on, and these in turn spawn many software resources, such as message pools.

The operating system exists to allocate these resources sensibly.

To sum up, the working definition we will use for now is: an operating system is a convention made to resolve contention for computer resources.

2. The operating system on the 51

For an operating system, the most important thing is parallel multitasking. To be clear, don't judge by the DOS of those years; times are different. Besides, when IBM and Bill Gates rushed the PC to market, they copied CP/M (was that the name? I don't quite remember) to make the DOS that looks so crudely built today. Look instead at a real operating system of that era, UNIX: it was multitasking while still on the drawing board.

On a PC, multitasking is no problem at all, but moving it onto an MCU is a headache:

1. Scarce system resources

On a PC the CPU frequency is measured in GHz and the memory in GB; an MCU's clock is usually a dozen or so MHz, and its memory is counted in bytes. Running several tasks at once on such meager resources means the operating system must use as little of the hardware as possible.

2. Demanding real-time requirements

A PC hardly needs to worry about real time, because nearly all real-time work on a PC is taken over by dedicated hardware: every sound card and network card has a built-in DSP and a large buffer, so the CPU just sits there giving directions and lets the boards deal with the real-time information.

Not so on an MCU: real-time information is handled by the CPU itself, and buffering is very limited or absent altogether. Once a message arrives, the CPU must respond within a very short time or the information is lost.

Take serial communication as an example. On a standard PC, plentiful memory lets information wait long enough to be handled. On an MCU, memory is scarce; the 51 has only 128 bytes of RAM, the register banks deduct another 8 to 32 bytes, and usually only a few bytes can be spared for buffering. You could of course merge the receiving and the processing of data into one step, but under an operating system that is not recommended.

Suppose data arrives at the MCU at 115,200 bps. Each bit then takes about 8.7 µs, so one byte (a 10-bit frame with start and stop bits) takes roughly 87 µs. With an 8-byte buffer, the serial task must therefore respond within roughly 8 × 87 ≈ 700 µs before data is overwritten.
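To double-check those numbers, here is a small compile-time sketch (my own addition; the 10-bit frame, 1 start + 8 data + 1 stop, is an assumption):

/* Timing sanity check (sketch). */
#define BAUD_BPS    115200UL
#define BITS_FRAME  10UL      /* 1 start + 8 data + 1 stop (assumed) */
#define BUF_BYTES   8UL

/* Time for one byte: 10 * 1000000 / 115200 = 86 us (integer math; ~87 us exactly) */
#define US_PER_BYTE ((BITS_FRAME * 1000000UL) / BAUD_BPS)

/* Deadline before an 8-byte buffer overruns: 8 * 86 = 688 us (~694 us exactly) */
#define DEADLINE_US (BUF_BYTES * US_PER_BYTE)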

Both problems point to the same conclusion: the operating system must be light, light and light again, and ideally take no resources at all (which is of course a dream).

There are many operating systems to choose from for MCUs, but few that suit the 51 (by "51" here I mean a 51 without extended memory). A while ago I saw a "circle operating system", the lightest OS I have come across so far, but even it leaves room for improvement.

Many people think the 51 is simply unsuited to an operating system. I don't entirely accept that; otherwise this article would not exist.

My view is that the 51 is unsuited to a "general-purpose operating system". By general-purpose I mean: whatever your application requirements and whatever chip you use, as long as it is a 51, you run the same operating system.

That idea works fine for a PC, reasonably well for larger embedded parts, and passably for an AVR, but not for an MCU as "poor" as the 51.

So what do we do? Tailor it: build the operating system on the spot, from the requirements!

Reading this, many of you are probably rolling your eyes, and the objections come in two flavors:

1. An operating system is so complicated; if I build my own, when will it ever be finished?

2. An operating system is so complicated; won't building it on the spot breed bugs?

Ha, do you see it? Both objections hinge on the word "complicated". If the operating system is not complicated, neither problem exists.

In fact, many people's picture of operating systems is one-sided. An operating system does not have to be elaborate and all-encompassing; even if it has nothing but the ability to manage parallel multitasking, you may still call it an operating system.

Once you understand how parallel multitasking works, writing one on the spot is not hard; and once that is done, giving each task a communication mechanism and growing the result into a system tailored to your application is not hard either.

To deepen your understanding, have a look at the "evolution" PPT, which shows step by step how parallel multitasking evolved from a sequential flow. It also touches on the "state machine" so many people use, and you will see how similar an operating system and a state machine are in principle. If you can write state-machine programs, you can write an operating system.

3. My first operating system

Straight to the subject: first, a demonstration operating system, to show how simple the original thing can be.

Of course, let me affirm up front that this is not yet a real operating system; apart from parallel multitasking it has no other functions. But everything starts simple: understand it first, then extend it into a real operating system as your application demands.

OK, here comes the code.

Drop the following code into KEIL and compile it. Put a breakpoint on the task_switch() call in each task function, run, and you will see that the tasks are indeed executing "simultaneously".

#include <reg51.h> //8051 SFR definitions (SP and friends)

#define MAX_TASKS 2 //Number of task slots. Must equal the actual number of tasks.
#define MAX_TASK_DEP 12 //Maximum stack depth. Never less than 2; 12 is a conservative value.

unsigned char idata task_stack[MAX_TASKS][MAX_TASK_DEP]; //Task private stacks.
unsigned char idata task_sp[MAX_TASKS]; //Saved stack pointer of each task.
unsigned char task_id; //Number of the currently active task.

//Task switching function (task scheduler)
void task_switch()
{
    task_sp[task_id] = SP; //Save the current task's stack pointer
    if (++task_id == MAX_TASKS) //Advance to the next task number
        task_id = 0;
    SP = task_sp[task_id]; //Point the stack pointer at the next task's private stack
}

//Task load function. Loads the given function (parameter 1) into the given task slot (parameter 2).
//If the slot already holds a task, the old task is lost, but the system itself does not fail.
void task_load(unsigned int fn, unsigned char tid)
{
    task_sp[tid] = (unsigned char)(task_stack[tid] + 1);
    task_stack[tid][0] = (unsigned char)(fn & 0xff); //Entry address, low byte
    task_stack[tid][1] = (unsigned char)(fn >> 8);   //Entry address, high byte
}

//Start task scheduling from the given task. Once called, this macro never returns.
#define os_start(tid) {task_id = tid; SP = task_sp[tid]; return;}

/*====================== Test code below ======================*/
void task1()
{
    static unsigned char i;
    while (1) {
        i++;
        task_switch(); //After compiling, put a breakpoint here
    }
}

void task2()
{
    static unsigned char j;
    while (1) {
        j += 2;
        task_switch(); //After compiling, put a breakpoint here
    }
}

void main()
{
    //Two tasks are loaded here, so MAX_TASKS must also be defined as 2
    task_load((unsigned int)task1, 0); //Load task1 into slot 0
    task_load((unsigned int)task2, 1); //Load task2 into slot 1
    os_start(0);
}

Due to space limitations I have simplified the code here and deleted most of the comments. You can download the source package instead: it is fully commented and comes with the KEIL project file, breakpoints already set; just press Ctrl+F5.

Now let's look at how this multitasking system works.

Strictly speaking, it is "cooperative multitasking": as long as a task keeps running and does not release the CPU, the other tasks have no chance and no way to run, unless the running task yields voluntarily.

Here, releasing the CPU is done through task_switch(). task_switch() is a very special function, which we may call the "task switcher".
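To make "cooperative" concrete, here is a minimal counter-example (my own sketch, using the names from the listing above): a task that never yields starves every other task.

void greedy_task(void)
{
    while (1) {
        /* ...does its work forever but never calls task_switch()... */
        /* no other task in the system will ever run again */
    }
}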

To understand how tasks are switched, first brush up on the stack.

There is a very simple question, so simple that I suspect nobody has paid it any attention:

We know that both CALL and JMP break the current program flow. What is the difference between them?

You will say: a CALL can be RETed from, a JMP cannot. True, but why? Why can a RET jump back to where the CALL came from, while a JMP cannot?

Obviously, CALL saves some information before breaking the flow, and the RET executed at the end retrieves that information to return to the breakpoint.

No need to spell it out: everyone knows the "some information" is the PC, and the "saving" is a push onto the stack.

Fortunately, on the 51 the stack and the stack pointer can be modified at will, as long as you're not afraid of dying. So what happens if we tamper with the stack before executing RET? Look:

When the program executes a CALL, we can, inside the subroutine, wipe out the return address that was just pushed and push the address of some function instead. When the RET executes, the program jumps to that function.

In fact, as long as we rewrite the stack before the RET, we can send the program wherever we want, not just back to the address the CALL pushed.
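As an illustration, here is a minimal sketch of this "RET as a JMP" trick in Keil A51-style assembly (my own addition; target stands for any code label):

    ; Push a target address, then let RET "return" into it.
    ; 8051 convention: RET pops the PC high byte first, then the low byte,
    ; so the low byte must be pushed first.
    MOV  A, #LOW(target)
    PUSH ACC              ; low byte of target onto the stack
    MOV  A, #HIGH(target)
    PUSH ACC              ; high byte of target onto the stack
    RET                   ; pops PCH, then PCL -> execution resumes at target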

Now for the key point...

First, we set aside a separate block of memory for each task, dedicated to serving as that task's stack. To hand the CPU to a task, we just point the stack pointer at its block.

Next we construct a function like this: when a task calls it, it saves the current stack pointer into a variable and loads the stack pointer of another task in its place. That function is the task scheduler.

Now, if we fill those stacks with the right initial contents and then call this function, task scheduling runs.

And where do those initial stack contents come from? That is the job of the "task loading" function: before scheduling starts, it puts the entry address of each task function into the "task-dedicated memory block" mentioned above. Incidentally, that block has a proper name: the "private stack". Private means each task's stack is its own; every task has one of its own.
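To picture what a freshly loaded private stack looks like, here is an annotated sketch (my own addition, matching the task_load() shown earlier):

/* After task_load(fn, t), slot t's private stack holds (sketch):
 *
 *   task_stack[t][0] = low byte of fn's entry address
 *   task_stack[t][1] = high byte of fn's entry address
 *   task_sp[t]       = address of task_stack[t][1]
 *
 * When the scheduler later sets SP = task_sp[t] and a RET executes,
 * the CPU pops PCH then PCL from this stack and "returns" straight
 * into fn: the task starts running from its first instruction.
 */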

Having said all that, I trust you can see what needs to be done:

1. Allocate several memory blocks, each of several bytes:

The "several memory blocks" are the private stacks; allocate as many blocks as there are tasks to run simultaneously. The "several bytes per block" is the stack depth. Remember, each level of subroutine call costs 2 bytes. Ignoring interrupts, 4 levels of call depth, i.e. an 8-byte stack, should be about right.

unsigned char idata task_stack[MAX_TASKS][MAX_TASK_DEP];

Of course, one thing must not be forgotten: saving the stack pointers. Otherwise, how would we know where in which block to fetch the data?

unsigned char idata task_sp[MAX_TASKS];

The two areas above hold the task information; let's give them a name: "task slots". Some people call them "task heaps", but I find "slot" more intuitive.

And we need a task number, otherwise how do we know which task is running right now?

unsigned char task_id;

When the task in slot 1 is running this value is 1; when the task in slot 2 runs, it is 2, and so on.

2. Construct the task-scheduling function:

void task_switch()
{
    task_sp[task_id] = SP; //Save the current task's stack pointer
    if (++task_id == MAX_TASKS) //Advance to the next task number
        task_id = 0;
    SP = task_sp[task_id]; //Point the stack pointer at the next task's private stack
}

3. Load the tasks:

Write the low and high bytes of each task function's entry address into task_stack[task number][0] and task_stack[task number][1]. For convenience, wrap this in a function, task_load(function name, task number):

void task_load(unsigned int fn, unsigned char tid)
{
    task_sp[tid] = (unsigned char)(task_stack[tid] + 1);
    task_stack[tid][0] = (unsigned char)(fn & 0xff); //Entry address, low byte
    task_stack[tid][1] = (unsigned char)(fn >> 8);   //Entry address, high byte
}

4. Start the task scheduler:

Point the stack pointer at the private stack of any task and execute a RET. Note that there is real subtlety here, and heads that haven't played with the stack may spin: this RET, where does it return to? Don't forget, before the RET we pointed the stack pointer at a function's entry address. Don't read RET as "return"; read it as another kind of JMP.

SP = task_sp[task number];
return;

Once these four things are done, the tasks begin executing "in parallel". A task function is written like any ordinary function; the only thing to mind (for now, at least) is to call task_switch() at suitable moments, such as where you used to sit in a delay loop, so the CPU is handed to the other tasks. A skeleton is sketched below.
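Here is what such a task skeleton looks like (my own sketch; my_task is a placeholder name, and the other names come from the listings above):

void my_task(void)
{
    /* one-time initialization goes here */
    while (1) {
        /* ...do one slice of work... */
        task_switch(); /* yield the CPU instead of busy-waiting */
    }
}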

Finally, a word on efficiency.

The overhead of this multitasking system is 20 machine cycles per switch (the CALL and the RET included). Expensive? Not at all; many multitasking schemes built out of state machines do no better, because switch/case and if() are not as cheap as you might think.

As for memory, yes, this mechanism does consume some. But I suggest you not stare at the "DATA = XXX byte" line in the compiler output; that figure means little, because it does not count the stack. Later I will discuss multitasking mechanisms that are more frugal with memory.

In a nutshell, this multitasking system suits applications with high real-time demands and modest memory needs. I measured it on an STC12C4052 running at 36 MHz: a task switch takes under 3 microseconds.

Next time we will cover what to watch out for when writing multitasking functions with KEIL.

After that, we will talk about how to strengthen this multitasking system and march it into the operating-system era.

4. Tips and precautions for writing a multitasking system with KEIL

There are many C51 compilers, and KEIL is one of the more popular. All the examples I give must be built with KEIL. Why? Not because KEIL is good and therefore I use it (though it really is excellent), but because the examples exploit certain characteristics of KEIL. With another compiler they would still compile, but you might get misplaced stacks, lost context and other fatal errors, because every compiler behaves differently. Let's be clear about that up front.

That said, as I stated at the beginning, the main purpose of this post is to explain the principle. Digest these examples and you can write your own OS for other compilers too.

Ok, let's talk about the features of KEIL. Let's look at the following function:

sbit sigl = P1^7;

void func1()
{
    register char data i;
    i = 5;
    do{
        sigl = !sigl;
    }while(--i);
}

You will say there is nothing special about this function. Don't be so sure: compile it, then expand the assembly listing and look:

193: void func1(){
194:   register char data i;
195:   i = 5;
C:0x00C3  7F05  MOV   R7,#0x05
196:   do{
197:     sigl = !sigl;
C:0x00C5  B297  CPL   sigl(0x90.7)
198:   }while(--i);
C:0x00C7  DFFC  DJNZ  R7,C:00C5
199: }
C:0x00C9  22    RET

Do you see it? This function uses R7, yet it does not protect R7!

Someone will jump up: so what, the caller wasn't using R7 anyway. Right, but only half right. In fact the KEIL compiler follows a convention: before calling a function, release all the registers as far as possible. Under normal circumstances, interrupt functions aside, every function may freely clobber the registers without pushing them first. (It's not quite that simple, but one bite at a time; I'll get to the rest shortly.)

What good is this characteristic? Plenty! When we call the task-switch function, every register can be struck off the list of things to preserve; only the stack needs protecting!

Now go back and look at the task-switch function from the earlier example:

void task_switch()
{
    task_sp[task_id] = SP; //Save the current task's stack pointer
    if (++task_id == MAX_TASKS) //Advance to the next task number
        task_id = 0;
    SP = task_sp[task_id]; //Point the stack pointer at the next task's private stack
}

See? Not a single register is protected. Expand the assembly and check: no register protection anywhere.

OK, now let me pour some cold water on everyone. Look at the following two functions:

void func1()
{
    register char data i;
    i = 5;
    do{
        sigl = !sigl;
    }while(--i);
}

void func2()
{
    register char data i;
    i = 5;
    do{
        func1();
    }while(--i);
}

Here the parent function func2() calls func1(). Expand the assembly code:

193: void func1(){
194:   register char data i;
195:   i = 5;
C:0x00C3  7F05  MOV   R7,#0x05
196:   do{
197:     sigl = !sigl;
C:0x00C5  B297  CPL   sigl(0x90.7)
198:   }while(--i);
C:0x00C7  DFFC  DJNZ  R7,C:00C5
199: }
C:0x00C9  22    RET
200: void func2(){
201:   register char data i;
202:   i = 5;
C:0x00CA  7E05  MOV   R6,#0x05
203:   do{
204:     func1();
C:0x00CC  11C3  ACALL func1(C:00C3)
205:   }while(--i);
C:0x00CE  DEFC  DJNZ  R6,C:00CC
206: }
C:0x00D0  22    RET

See it? The variable in func2() uses register R6, and R6 is protected in neither func1 nor func2.

At this point you may want to jump up again: func1() doesn't use R6, so why protect it? Exactly. But how does the compiler know func1() doesn't use R6? It deduces it from the call relationships.

So KEIL allocates registers to every function according to the direct call relationships between functions: nothing saved, nothing clobbered. KEIL is great!! Hold on, don't celebrate yet; try the same thing in a multitasking environment:

void func1()
{
    register char data i;
    i = 5;
    do{
        sigl = !sigl;
    }while(--i);
}

void func2()
{
    register char data i;
    i = 5;
    do{
        sigl = !sigl;
    }while(--i);
}

Expand the assembly code to see:

193: void func1(){
194:   register char data i;
195:   i = 5;
C:0x00C3  7F05  MOV   R7,#0x05
196:   do{
197:     sigl = !sigl;
C:0x00C5  B297  CPL   sigl(0x90.7)
198:   }while(--i);
C:0x00C7  DFFC  DJNZ  R7,C:00C5
199: }
C:0x00C9  22    RET
200: void func2(){
201:   register char data i;
202:   i = 5;
C:0x00CA  7F05  MOV   R7,#0x05
203:   do{
204:     sigl = !sigl;
C:0x00CC  B297  CPL   sigl(0x90.7)
205:   }while(--i);
C:0x00CE  DFFC  DJNZ  R7,C:00CC
206: }
C:0x00D0  22    RET

See it? This time no magic can save us. Because the two functions have no direct call relationship, the compiler assumes they cannot conflict and hands them the same register. When execution switches from func1() to func2(), func1()'s register contents are destroyed. Try compiling the following program:

sbit sigl = P1^7;

void func1()
{
    register char data i;
    i = 5;
    do{
        sigl = !sigl;
        task_switch();
    }while(--i);
}

void func2()
{
    register char data i;
    i = 5;
    do{
        sigl = !sigl;
        task_switch();
    }while(--i);
}

This is only a demo, so here you could still dodge the conflict by assigning different registers by hand. In a real application, task switches happen essentially at random, and there is no predicting which registers will be conflict-free at any given moment, so hand-allocating registers is a dead end. What then?

This will do:

sbit sigl = P1^7;

void func1()
{
    static char data i;
    while(1){
        i = 5;
        do{
            sigl = !sigl;
            task_switch();
        }while(--i);
    }
}

void func2()
{
    static char data i;
    while(1){
        i = 5;
        do{
            sigl = !sigl;
            task_switch();
        }while(--i);
    }
}

It is enough to change the variables in the two functions to static. You can also do this:

sbit sigl = P1^7;

void func1()
{
    register char data i;
    while(1){
        i = 5;
        do{
            sigl = !sigl;
        }while(--i);
        task_switch();
    }
}

void func2()
{
    register char data i;
    while(1){
        i = 5;
        do{
            sigl = !sigl;
        }while(--i);
        task_switch();
    }
}

That is, don't switch tasks while the variable is live; use the variable up first, then switch. The two tasks still destroy each other's register contents, but by that point neither cares what the registers hold.

What we have been looking at is the "variable overlay" problem; let's now treat it systematically.

Variables come in two kinds: global and local (register variables count as local here).

Every global variable is assigned its own address.

For local variables, KEIL performs "overlay optimization": functions with no direct call relationship share variable space. Since the variables are never in use at the same time there is no conflict, which is a fine thing on a 51 with so little memory.

But now we have entered the multitasking world, where two functions without a direct call relationship really do run side by side, and their space must not be shared. What to do? The clumsy way is to switch overlay optimization off. Clumsy indeed.

The simpler way keeps the optimization and instead declares static every variable whose live range crosses a task switch (in other words, every variable still needed after task_switch() is called). For beginners: you may think of "static" as "global", because its address space is permanently reserved; yet it is not global, being accessible only inside the braces {} that enclose its definition.

Static variables have a side effect: they keep occupying memory even after the function exits. So when writing a task function, try to switch tasks only after a variable's live range has ended. The exception is a variable whose live range is long (long in time), where hogging the CPU to the end would hurt the other tasks' real-time behavior; only then should you switch inside the live range and declare the variable static.

In fact, with a reasonably clear programming style, very few variables need to live across a task switch, which is to say the statics stay few.

With "overlay" covered, let's talk about "reentrancy".

Reentrancy means a function has two different executions in flight at the same time. Beginners may find that hard to picture, so an example:

Suppose a function is called from the main program and also from an interrupt. What happens if the interrupt fires exactly while the main program is inside that function?

void func1()
{
    static char data i;
    i = 5;
    do{
        sigl = !sigl;
    }while(--i);
}

Suppose func1() has counted i down to 3 when the interrupt fires. The interrupt calls func1() again, which destroys the value of i: by the time the interrupt returns, i == 0.

That was the traditional single-task setting, where the odds of reentrancy are not high. In a multitasking system it happens all too easily; see the following example:

void func1()
{
    ...
    delay();
    ...
}

void func2()
{
    ...
    delay();
    ...
}

void delay()
{
    static unsigned char i; //Note: declared static. Without static we get the overlay problem; with it we get the reentrancy problem. Trouble either way.
    for(i=0; i<10; i++)
        task_switch();
}

Two tasks running in parallel both call delay(): that is reentrancy. The problem is that both executions rely on the same variable i to control their loops, and i is live across task switches, so each task tramples the other's count (for instance, task2 may reset i to 0 just after task1 has counted it up to 3).

The only cure for reentrancy is prevention: don't let it happen in the first place. For example, change the code to the following:

#define delay() {static unsigned char i; for(i=0; i<10; i++) task_switch();} //i is still static, but the two expansions are no longer the same function, so the addresses assigned differ.

void func1()
{
    ...
    delay();
    ...
}

void func2()
{
    ...
    delay();
    ...
}

Replacing the function with a macro means every call site gets its own copy of the code, so the two delays end up using different memory addresses and the reentrancy problem vanishes.

The trouble with this method is that every call to delay() emits another copy of the object code; with a lot of delay code that wastes a lot of ROM space. Any other way?

My knowledge is limited; I have only one move left:

void delay() reentrant
{
    unsigned char i;
    for(i=0; i<10; i++)
        task_switch();
}

With the reentrant attribute added, the function supports reentrancy. Use it sparingly, though: a function declared reentrant becomes dreadfully inefficient!

Finally, interrupts. There is not much to say, so they don't get a section of their own.

Interrupts are written just as usual, except that under the multitasking system shown here the stack is under pressure, so use "using" to reduce stack consumption (and, for the same reason, avoid calling subroutines from inside the interrupt).

When you use "using", you must add #pragma NOAREGS to turn off absolute register addressing, and any function called from the interrupt should also sit within the scope of #pragma NOAREGS. As in this example:

#pragma SAVE
#pragma NOAREGS //Absolute register addressing must be off when 'using' is used
void clock_timer(void) interrupt 1 using 1 //'using' relieves the pressure on the stack
{
    //...handle the interrupt here: set a flag, nothing more
}
#pragma RESTORE

With this change, an interrupt occupies a fixed 4 bytes of stack. That is, if without interrupts you would set the task stack depth to 8, you should now set 8 + 4 = 12.

One more word of advice: do as little as possible inside the interrupt; just set a flag, and hand everything else to the task responsible for it, as sketched below.
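A minimal sketch of that flag-and-task division of labor (my own addition; the names uart_flag, uart_isr and uart_task are placeholders, and a real serial ISR would also read SBUF and clear RI):

volatile unsigned char uart_flag; //set by the ISR, cleared by the task

#pragma SAVE
#pragma NOAREGS
void uart_isr(void) interrupt 4 using 1 //8051 serial interrupt
{
    uart_flag = 1; //just record that something arrived
}
#pragma RESTORE

void uart_task(void)
{
    while (1) {
        if (uart_flag) {
            uart_flag = 0;
            //...process the received data here...
        }
        task_switch(); //yield to the other tasks
    }
}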

Now let's summarize:

1. When switching tasks, make sure no register is live across the switch, or one task will overwrite another's registers. Solve it with static variables.

2. When switching tasks, make sure no overlaid variable is live across the switch, or one task will overwrite another's address space (variables). Solve it with static variables.

3. Two different tasks must not call the same function at the same time, or a reentrancy clash results. Solve it with the reentrant declaration.

A short task template obeying all three rules is sketched below.
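Here is that template (my own sketch; safe_task is a placeholder name):

void safe_task(void)
{
    static unsigned char total; //live across task_switch(): must be static (rules 1 and 2)

    while (1) {
        register unsigned char k; //scratch variable, never live across a switch
        for (k = 0; k < 8; k++) {
            total++; //...one slice of work...
        } //k is dead from here on...
        task_switch(); //...so switching now is safe
    }
}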
