Hi,
I've used the waittime function for a long time in cases where I needed a delay in my program.
But recently a question came up: how precise is it, and are there any alternatives that allow delays of less than 1/100 of a second?
As an experiment, I wrote a short program:
#include <stdio.h>      /* printf */
#include <sys/time.h>   /* gettimeofday */
#include <decimal.h>    /* ILE C packed-decimal support */
/* _MI_Time and waittime() are declared in the MI headers (QSYSINC, <mih/...>) */

int main (int argc, char *argv[])
{
    _MI_Time WaitTime ;
    struct timeval tp1, tp2 ;
    struct timezone tpz ;
    decimal (9,3) Duration ;
    decimal (19,6) StartTime, EndTime ;
    int i ;

    for (i = 1; i < 10; i++)
    {
        /* start-of-interval timestamp, as seconds with a microsecond fraction */
        gettimeofday(&tp1, &tpz) ;
        StartTime = tp1.tv_usec ;
        StartTime = tp1.tv_sec + (StartTime / 1000000D) ;

        /* ... the loop then builds the i*10 ms wait interval, calls waittime(), */
        /* takes a second gettimeofday() reading into tp2, and computes and      */
        /* prints Duration in milliseconds ...                                    */
    }
}
So it runs nine cycles, producing delays of (theoretically) 10, 20, 30 ... 90 ms.
And here are the results:
Delay = 10 ms, waittime: 18.480 ms
Delay = 20 ms, waittime: 38.960 ms
Delay = 30 ms, waittime: 59.440 ms
Delay = 40 ms, waittime: 69.984 ms
Delay = 50 ms, waittime: 50.552 ms
Delay = 60 ms, waittime: 84.936 ms
Delay = 70 ms, waittime: 83.336 ms
Delay = 80 ms, waittime: 109.912 ms
Delay = 90 ms, waittime: 119.136 ms
Isn't it strange?
I have no reason to think that gettimeofday gives invalid results, and the calculations are pretty simple...
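For comparison, the same measurement can be written as a small, self-contained program. This is only a sketch and not the program above: since the waittime() call itself is not shown, it uses usleep() as a stand-in for the delay (assuming usleep() is available in your environment) and plain doubles instead of ILE C decimals:

#include <stdio.h>
#include <unistd.h>     /* usleep() */
#include <sys/time.h>   /* gettimeofday() */

int main (void)
{
    struct timeval tp1, tp2 ;
    int i ;

    for (i = 1; i < 10; i++)
    {
        long requested_ms = i * 10 ;             /* 10, 20, ... 90 ms */

        gettimeofday(&tp1, NULL) ;
        usleep(requested_ms * 1000) ;            /* stand-in for waittime() */
        gettimeofday(&tp2, NULL) ;

        /* elapsed time in milliseconds from the two timeval readings */
        double elapsed_ms = (tp2.tv_sec  - tp1.tv_sec)  * 1000.0
                          + (tp2.tv_usec - tp1.tv_usec) / 1000.0 ;

        printf("Delay = %ld ms, measured: %.3f ms\n", requested_ms, elapsed_ms) ;
    }
    return 0 ;
}

Running a version like this alongside the waittime one would show whether the extra milliseconds come from the timing method or from the wait itself.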
Is it a fundamental feature of waittime?
Jevgeni