As an IC verification engineer, how to read the spec effectively and cross-check it against the detailed design

A question from user watery smoke in the EETOP Forum verification sub-forum:

How should a verification engineer read the spec?

During development, designers and verification engineers naturally pay attention to different things, especially in how they understand the spec, where the verification engineer needs an independent interpretation of their own. When you receive the spec as a verifier, how do you extract the function points and turn them into a corresponding reference model, so that it can be cross-checked against the detailed design? What experience can you share?

Replies from forum users:

Jimbo1006:

I think when looking at the function points in the spec, the verifier needs to pay attention to the inputs, the outputs, and the time from input to output.

First, "the time from input to output" is the internal delay of the RTL, and I think it is the biggest difficulty in building the reference model, because even the person who wrote the spec probably does not know it. At that point we ask the designer or read the RTL code, but then we are very likely to be influenced by the designer's thinking. The reference model can end up wrong in the same way as the RTL, and the verification environment may then never find the bug.

Next, the mapping from input to output is essentially a truth table, and what we have to do is design and constrain the stimulus according to a randomization strategy. But as the logic gets more complex, this truth table grows larger and larger and becomes impossible to write out in full. At that point we can split the large truth table into several small ones, just as a large module is partitioned into sub-modules during design. That is easy to say, but the workload grows exponentially with the logic complexity. If the goal really is "cross-validation", it would be better to have another designer implement the same module independently, compare the two results, and then run a back-annotated delay simulation; no verification engineer would be needed for that.

Finally, when I did design work at university, I first built the reference model (written in C++ and run in software to see the effect), then designed the module according to the reference model, and finally ran it on an FPGA. If you add a verification step to that flow, you can verify directly against the reference model. So I think the reference model should come first and the RTL to be verified afterwards; that is the most reasonable process. In my actual job, however, the RTL and the reference model exist at the same time, and the original poster's company is probably the same, otherwise this question would not come up. The reference model is nominally written from the spec, but it has to obtain its internal delays from the RTL, so for that part of the logic we are effectively deriving the reference model from the RTL. If you are a little careless you will let a bug through, because in general the auto-check in our scoreboard compares directly against the output of the reference model and never checks its internal logic.
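As a rough illustration of that last point, here is a minimal UVM-style sketch of a scoreboard whose auto-check only compares the reference model's predicted output against the DUT's output. The transaction class my_txn, the placeholder transform in ref_model_predict, and all names are hypothetical, and any latency assumption borrowed from the RTL would live inside ref_model_predict, where this check never sees it.

`include "uvm_macros.svh"
import uvm_pkg::*;

// Hypothetical transaction carrying a single data word.
class my_txn extends uvm_sequence_item;
  rand bit [7:0] data;
  `uvm_object_utils_begin(my_txn)
    `uvm_field_int(data, UVM_ALL_ON)
  `uvm_object_utils_end
  function new(string name = "my_txn");
    super.new(name);
  endfunction
endclass

`uvm_analysis_imp_decl(_dut_in)
`uvm_analysis_imp_decl(_dut_out)

class refmodel_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(refmodel_scoreboard)

  uvm_analysis_imp_dut_in #(my_txn, refmodel_scoreboard) in_imp;   // stimulus seen at the DUT input
  uvm_analysis_imp_dut_out#(my_txn, refmodel_scoreboard) out_imp;  // responses observed at the DUT output

  my_txn expected_q[$];  // outputs predicted by the reference model

  function new(string name, uvm_component parent);
    super.new(name, parent);
    in_imp  = new("in_imp",  this);
    out_imp = new("out_imp", this);
  endfunction

  // Reference model: if its transform (or a latency assumption copied from the
  // RTL) is wrong in the same way as the DUT, the check below still passes.
  function my_txn ref_model_predict(my_txn t);
    my_txn exp = my_txn::type_id::create("exp");
    exp.data = t.data + 8'd1;   // placeholder transform
    return exp;
  endfunction

  function void write_dut_in(my_txn t);
    expected_q.push_back(ref_model_predict(t));
  endfunction

  function void write_dut_out(my_txn t);
    my_txn exp;
    if (expected_q.size() == 0) begin
      `uvm_error("SB", "DUT output seen with no pending expected transaction")
      return;
    end
    exp = expected_q.pop_front();
    // Only the reference model's output is checked; its internals are not.
    if (exp.data !== t.data)
      `uvm_error("SB", $sformatf("Mismatch: expected %0h, got %0h", exp.data, t.data))
  endfunction
endclass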

Zxm92:

1. The original poster's heavy focus on the reference model suggests that he is still approaching verification largely from a designer's point of view. I was also obsessed with the reference model at the beginning; automatic comparison is very satisfying for someone new to verification.

2. On the input-to-output timing check mentioned above: I think the reference model should focus on comparing the data stream, while checks on timing can be done with assertions.

Personally I think the designer and the verifier look at the problem from different angles:

1. The designer usually stands in the perspective of implementing the function, whereas the verifier should look more from the user's point of view, that is, how the finished chip will be used.

2. How the chip is used determines your stimulus, and your stimulus determines the situations the chip is driven into. If the chip holds up under every possible condition, then its quality is assured, so how you design your stimulus is the core of verification.
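A minimal sketch of the assertion-based timing check suggested in point 2, assuming a hypothetical handshake where a response must follow a request within a bounded number of clock cycles. The signal names, the module name latency_check, and the default bounds are illustrative, not taken from any spec discussed here.

`timescale 1ns/1ps

// Hypothetical handshake: every req_valid must be answered by resp_valid
// within MIN_LAT..MAX_LAT cycles of clk.
module latency_check #(
  parameter int MIN_LAT = 2,
  parameter int MAX_LAT = 8
) (
  input logic clk,
  input logic rst_n,
  input logic req_valid,
  input logic resp_valid
);

  property p_req_to_resp;
    @(posedge clk) disable iff (!rst_n)
      req_valid |-> ##[MIN_LAT:MAX_LAT] resp_valid;
  endproperty

  a_req_to_resp: assert property (p_req_to_resp)
    else $error("resp_valid did not arrive %0d to %0d cycles after req_valid",
                MIN_LAT, MAX_LAT);

endmodule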

Jimbo1006 replying to zxm92:

I have not been working long, and the reference model is what confuses me most. I feel that when I write one, I inevitably copy some of the logic into it, and if that logic is wrong in the same way as in the DUT, the outcome can be very bad.

I agree that the reference model is better suited to comparing data streams, because I have verified a UART module: for that kind of function the result of interest is mainly the values in registers, and a reference model is very handy. Later, reading the UVM Cookbook, I found that this kind of data-stream comparison does not even need a reference model: just build the transaction in the slave agent and send it, together with the master agent's transaction, straight to the scoreboard for comparison.

I did try assertions for the timing checks, but then gave up. To write an assertion I need to know the time between two events, and because the logic is complicated the DUT has many internal signals; for a given function point the signal path is "input -> internal signal a -> internal signal b -> ... -> output". The input and output can be seen from the spec, but the time from input to output is probably unknown even to the person who wrote the spec. I started out writing assertions directly on the input-to-output time, because I could not trust the internal signals, but when I asked the designer he also explained everything in terms of those internal signals, so all I could do was confirm against waveforms. At that volume you cannot rely on eyeballing waveforms, so I constructed the expected output waveform according to my own understanding and compared it with the DUT's actual output waveform; whenever UVM reports a mismatch, I go to the system engineer and the designer to confirm. By then I found that even SV assertions could not meet my needs, and I could only write the automatic-check logic in SV or C++.

Stimulus is very important in verification, I agree, but I don't think it is the core. For the UVM methodology, I think the core is how to judge whether the result of a stimulus is correct, together with the coverage design. The coverage design is like an outline, and the stimulus is just written step by step according to that outline (preferably split between two people), and SV is well suited to this.
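The UVM Cookbook style of data-stream comparison described above can be sketched roughly like this, reusing the hypothetical my_txn transaction from the earlier sketch: the master-agent monitor and the slave-agent monitor both publish what they observe, and the scoreboard compares the two streams in order with no reference model in between.

`include "uvm_macros.svh"
import uvm_pkg::*;

`uvm_analysis_imp_decl(_mst)
`uvm_analysis_imp_decl(_slv)

// Compares the transaction stream seen by the master-agent monitor against
// the stream seen by the slave-agent monitor; no reference model is involved.
// my_txn is the hypothetical transaction class from the earlier sketch.
class stream_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(stream_scoreboard)

  uvm_analysis_imp_mst #(my_txn, stream_scoreboard) mst_imp;
  uvm_analysis_imp_slv #(my_txn, stream_scoreboard) slv_imp;

  my_txn mst_q[$];
  my_txn slv_q[$];

  function new(string name, uvm_component parent);
    super.new(name, parent);
    mst_imp = new("mst_imp", this);
    slv_imp = new("slv_imp", this);
  endfunction

  function void write_mst(my_txn t); mst_q.push_back(t); try_compare(); endfunction
  function void write_slv(my_txn t); slv_q.push_back(t); try_compare(); endfunction

  // In-order comparison of the two streams.
  function void try_compare();
    while (mst_q.size() > 0 && slv_q.size() > 0) begin
      my_txn m = mst_q.pop_front();
      my_txn s = slv_q.pop_front();
      if (!m.compare(s))
        `uvm_error("SB", {"Data stream mismatch:\n", m.sprint(), s.sprint()})
    end
  endfunction
endclass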

Like water:

Everyone clearly has thought about this deeply, and you can see that we are all feeling our way along this road of verification. Compared with the replies above, black-box verification is undoubtedly less labor-intensive: you only need to pay attention to inputs and outputs, and focus more on how to judge right from wrong and how to build stimulus from coverage. But the root of these problems still lies in understanding the spec, and apart from good methods I suspect that mostly comes down to accumulated experience; I don't have any good tricks in this area.

Jimbo1006:

1. When reading the spec, read every sentence carefully and put yourself in the customer's place: think about what the customer will think when they see this sentence, and how they will use the feature.

2. Scrutinize every function point and think about it from multiple angles. For example, when an enable signal is turned on, some output is 1; but what is the output when the enable signal is turned off: 0, 1, or don't-care? If the spec does not state this clearly, we need to find the system engineer (the person who wrote the spec) to confirm.

3. By the module design stage, system engineers and designers are likely to share tacit understandings built up through long-term cooperation. For example, after a switch is turned on, the DUT needs a few clocks before it picks up the signal, while the spec describes only the ideal behavior. Such tacit understandings may make the design work more efficient, but they are a big problem for us verifiers.

Every sentence of the spec can hide a tacit understanding like this, and it affects not only the efficiency of verification but also its reliability. For example, I once wrote automatic-comparison code for a function point according to the ideal spec. After simulation the result came out wrong, and only then did the system engineer tell me such a tacit understanding existed. All I could do was go through the comparison code, find every place the extra clocks were involved, and change them back (see the parameterized checker sketch below).

But there is actually a risk here. Suppose my comparison code misses those clocks somewhere, so that for a certain configuration it always produces the default value 0 (when the correct output should be 1), and the DUT, wrong for that same configuration, also outputs 0. Then the automatic comparison and the DUT agree, I conclude both are correct, and the bug is missed.
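One way to keep such a tacit delay from being scattered through the automatic-comparison code is to make it a single, visible parameter of a checker, as in the rough sketch below. The rule (the output must follow the enable after a fixed pick-up delay) and all names are assumptions for illustration only.

`timescale 1ns/1ps

// Hypothetical rule: once cfg_en rises, dut_out must be 1, but the DUT is
// allowed CFG_PICKUP_CYCLES clocks to synchronize cfg_en internally first.
// Keeping the "tacit" delay in one parameter means only this number changes
// when the system engineer reveals it.
module pickup_latency_check #(
  parameter int CFG_PICKUP_CYCLES = 2
) (
  input logic clk,
  input logic rst_n,
  input logic cfg_en,
  input logic dut_out
);

  property p_en_effect;
    @(posedge clk) disable iff (!rst_n)
      $rose(cfg_en) |-> ##CFG_PICKUP_CYCLES dut_out;
  endproperty

  a_en_effect: assert property (p_en_effect)
    else $error("dut_out not asserted %0d cycles after cfg_en rose", CFG_PICKUP_CYCLES);

endmodule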

Zxm92:

1. "To write an assertion I need to know the time between the two events" --> I think this should really be phrased as: the spec does not define the time between the two events. If no standard defines the time yet it still has to be checked, it is not that assertions cannot do it; nothing can do it directly. Suppose I am worried that the input-to-output response time is too long and the spec says nothing about it. I would record the time of each input, record the time of each output, pair up the corresponding inputs and outputs to compute the latency, take the maximum, and then consider whether that maximum is too large (see the latency-monitor sketch after these points).

2. "Construct the output waveform according to your own understanding, compare it with the DUT's actual output waveform, and whenever UVM reports an error go to the system engineer and designer to confirm" --> I don't understand how you construct the output waveform. If any input-to-output time in the range 3us to 5us is correct, how do you design the expected waveform?

3. "Coverage design is like an outline; the stimulus is just written step by step according to the outline." --> If you mean functional coverage, I think coverage and stimulus should not be written by the same person, just as the person who sets an exam should not also sit it. Stimulus and coverage are two expressions of the same scenarios. It is not that coverage comes first and the stimulus then goes to cover it; without thinking through the stimulus you cannot write the coverage either. Suppose the DUT has ten functions: must they be exercised serially, or can they run in parallel? Is there any restriction on the serial order? What has to be synchronized when they run in parallel? Bugs often appear in exactly the scenarios nobody thought of.

4. "The core should be how to judge whether the result after the stimulus input is correct" --> To judge the result of a stimulus, there must first be a stimulus; only when there is a result is there something to judge. If the stimulus is not thorough enough, a correct judgment does not mean the DUT is correct.

5. "When reading the spec, read every sentence carefully, think of yourself as the customer, and consider what the customer will think when they see this sentence and how they will use it."

--> This sentence means the same thing as what I said in my earlier reply: "The designer usually stands in the perspective of implementing the function, whereas the verifier should look more from the user's point of view, that is, how the chip will be used."
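The latency-monitor sketch referred to in point 1 could look roughly like this, assuming one in-order output per input. The signal names and the MAX_ALLOWED threshold are placeholders rather than anything from a real spec.

`timescale 1ns/1ps

// Records the time of each input, pairs it with the next output (assuming
// one in-order output per input), tracks the worst-case latency, and flags
// anything above a review threshold.
module latency_monitor #(
  parameter time MAX_ALLOWED = 5us   // review threshold, not from any spec
) (
  input logic clk,
  input logic in_valid,
  input logic out_valid
);

  time in_times[$];
  time max_latency = 0;

  always @(posedge clk) begin
    if (in_valid)
      in_times.push_back($time);
    if (out_valid && in_times.size() > 0) begin
      automatic time lat = $time - in_times.pop_front();
      if (lat > max_latency) max_latency = lat;
      if (lat > MAX_ALLOWED)
        $error("Input-to-output latency %0t exceeds %0t", lat, MAX_ALLOWED);
    end
  end

  final begin
    $display("Worst-case input-to-output latency observed: %0t", max_latency);
  end

endmodule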

Jimbo1006 replying to zxm92 (post #7):

I am very pleased that most of our views are the same or similar, because my main purpose in starting this discussion on the forum was to test my methods and ideas. Where our views differ, I think the main reason is the "automatic comparison" above. As you said in post #3, for someone verifying for the first time, automatic comparison is very tempting. My current view is that automatic comparison is where the main value lies, at least for now and for the verification engineers to come; it is also the romance of us verifiers. I have only verified a few modules so far, most of them with automatic comparison, and I found quite a few bugs that amazed the designers and system engineers. At the review meeting, when they asked how I had managed to verify such a special case, the sense of accomplishment and satisfaction was something I could not pull myself away from. If you try an automatic-comparison environment like mine, you may come to accept some of my opinions.

The following points correspond to the points in your post #7.

1. I need to know the time between the two events because I was writing the automatic-comparison code, and I found it very difficult to enumerate the DUT's output under every configuration without using the DUT's internal signals or intermediate signals of my own design. This is just like a designer partitioning a large module into small ones: when designing the automatic comparison I also have to define intermediate points. Because there are many combinations of inputs and corresponding outputs, the path from input through the intermediate points to the output is not a simple linear structure but a mesh, and that mesh structure forces me to know, at least roughly, the time to each intermediate point for every configuration. Of course, if the system engineer could give me a table of every input combination with the corresponding output and the time between them, none of this would be needed. But they certainly cannot do that, and even if they could, when several configurations are applied to the DUT in different orders, the corresponding output times may change as well.

2. The automatic comparison of output waveforms is what I did when verifying the MCM module (composed of many PWM-related modules). I designed a collect module connected to the slave agent's monitor. Based on the DUT's output signal and the corresponding output-enable (oe) signal, it builds a transaction that records the value of the output waveform (0 or 1), how long that value lasts (sampling the output signal at a rate of the system clock/2+1 to confirm it), whether the output is in a high-impedance state (oe is 0), and so on. I pass this transaction to the scoreboard, and the input combination is passed to the scoreboard through the master agent's monitor. In the scoreboard I then automatically compare the so-called ideal waveform against the actual output waveform.

I use fork/join to build three concurrent threads: the first randomizes once; the second compares, in real time, the ideal waveform under the current input configuration with the actual output waveform; the third modifies parameters such as the sampling frequency as needed. For the case you raise, where anything in the 3us to 5us range is correct, there are many possible solutions within this structure. To keep the structure reusable, I can nest two fork/joins inside the second thread, each with two concurrent branches: in the first, one branch waits 3us and the other waits 5us (the 5us branch sets a flag1); the second compares the waveform, with the 3us point corresponding to another flag2; and at the end a final piece of code checks all of the flags (see the window-check sketch after these points).

3. Coverage and stimulus really should not be written by one person; that is my view too. But at present I am the only one in our company studying UVM verification, so where would I find someone else to write the stimulus? Even if someone is hired later, it is not feasible for two people to verify one module for the time being; after all, understanding the spec takes a lot of time. Think about it this way: if I have really finished the coverage and the automatic-comparison code, I can find a few fresh graduates and let them use whatever means they like, as long as they meet the coverage requirements and pass my automatic comparison; that is enough, and it saves a lot of time and labor cost. Having the person who writes the stimulus and the person who writes the coverage check each other is of course ideal, but on closer analysis the real decision-making power lies with whoever writes the coverage. As for the serial versus parallel problem with the ten functions you mention, I ran into exactly that when verifying the MCM module, and the solution is as described in point 2 (a rough coverage-outline sketch also follows after these points).

4. As mentioned in point 3, as long as I have designed the coverage, you can inspect the coverage at any time after simulation (I use VCS+Verdi) and see directly which points have not been covered; the person writing the stimulus can then rework the stimulus or add new stimulus. Of course, I am not saying that writing stimulus is unimportant. The stimulus gets feedback through the coverage percentage, but if the coverage design itself is not complete, who provides feedback on that? (Code coverage and functional coverage can cross-check each other to some extent, but the reliability of that is really not high.)

5. I am very pleased that our views here are the same.
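The window check referred to in point 2 might be sketched roughly as follows, reduced to a single fork/join_any with a timeout branch and an observation branch instead of the nested fork/joins and flags described above. The 3us to 5us bounds follow the example in this thread, while the signal and module names are assumptions.

`timescale 1ns/1ps

// Hypothetical window check: after stim_applied pulses, dut_out must rise
// no earlier than T_MIN and no later than T_MAX.
module window_check #(
  parameter time T_MIN = 3us,
  parameter time T_MAX = 5us
) (
  input logic stim_applied,
  input logic dut_out
);

  always @(posedge stim_applied) begin
    automatic time t_start = $time;
    automatic bit  seen_in_window = 0;
    fork
      // Branch 1: bounds the wait at the latest legal arrival time.
      #T_MAX;
      // Branch 2: records whether the output edge landed inside the window.
      begin
        @(posedge dut_out);
        seen_in_window = ($time - t_start) >= T_MIN;
      end
    join_any
    disable fork;
    if (!seen_in_window)
      $error("dut_out did not rise within [%0t, %0t] after the stimulus at %0t",
             T_MIN, T_MAX, t_start);
  end

endmodule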
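And for the coverage outline referred to in point 3, a rough functional-coverage sketch of the serial/parallel question might look like this, assuming a hypothetical DUT with three selectable functions. The enum values, the overlap flag, and the cross are illustrative only.

// Hypothetical coverage outline: which function ran, which function preceded
// it, and whether another function was active at the same time.
typedef enum {FUNC_A, FUNC_B, FUNC_C} func_e;

class func_coverage;
  func_e current_func;
  func_e previous_func;
  bit    overlapped;   // 1 if a second function ran in parallel with this one

  covergroup cg;
    cp_func     : coverpoint current_func;
    cp_prev     : coverpoint previous_func;
    cp_parallel : coverpoint overlapped;
    // Every ordering of two functions, serial and overlapped.
    x_order     : cross cp_prev, cp_func, cp_parallel;
  endgroup

  function new();
    cg = new();
  endfunction

  function void sample(func_e prev, func_e curr, bit in_parallel);
    previous_func = prev;
    current_func  = curr;
    overlapped    = in_parallel;
    cg.sample();
  endfunction
endclass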

