[ale] UTC time vs GPS time, not the same???

alan at alanlee.org
Fri Jan 20 00:25:40 EST 2012


Computer motherboards use cheap crystals.  Even a very small error (< 100 ppm)
accumulates continuously and adds up fast.  Most PC motherboards I've
seen are off by 250 to 500 ppm.  The most accurate consumer-grade
crystals are in the range of 15-45 ppm, and even that error adds up in a
relatively short amount of time.  GPS is accurate to a few ppb (yes, b).
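To put those drift rates in perspective, here's a quick back-of-the-envelope
sketch (the ppm figures are the ones above; the function name is just mine, and
it assumes the frequency offset stays constant, which real crystals don't as
temperature changes):

```python
# Accumulated wall-clock error per day for a given constant frequency offset.
SECONDS_PER_DAY = 86400

def error_per_day(ppm):
    """Seconds of accumulated clock error per day at a given ppm offset."""
    return ppm * 1e-6 * SECONDS_PER_DAY

for label, ppm in [("worst PC motherboard", 500),
                   ("typical PC motherboard", 250),
                   ("consumer-grade crystal", 45),
                   ("GPS (3 ppb)", 0.003)]:
    print(f"{label}: {error_per_day(ppm):.6f} s/day")
```

A 500 ppm board loses or gains over 40 seconds a day, while a GPS-disciplined
clock at a few ppb is off by well under a millisecond.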
 
I've experienced lots of problems using a PC-generated clock to stream
video in a broadcast environment.  The frequency is so far off that most
broadcast equipment can't even track it.
 
I've watched my VCXO clock recovery algorithm track a very
accurate satellite transponder clock (FOX).  The slew rate is normally in the
range of +/- 1-2 ppb.  The bit rate didn't exactly match the transponder
frequency, so every few minutes the uplink added an extra bit.  When it did, you
could see the clock recovery jump up to +/- 10 ppb and then track back down.
It's pretty amazing how accurate some of the television/satellite broadcast
clocks are.
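For anyone wondering how an offset like that is measured, the basic idea is
just a ratio: compare elapsed time on the local oscillator against elapsed
time on the reference and scale to parts per billion.  A minimal sketch (the
function name and numbers are mine, not from any particular clock-recovery
code):

```python
def offset_ppb(local_elapsed, reference_elapsed):
    """Fractional frequency offset of the local clock vs. the reference,
    in parts per billion.  Positive means the local clock runs fast."""
    return (local_elapsed - reference_elapsed) / reference_elapsed * 1e9

# Example: the local clock counts 3600.0000036 s while the reference
# counts exactly 3600 s -- the local clock is fast by about 1 ppb.
print(offset_ppb(3600.0000036, 3600.0))
```

A real VCXO loop would low-pass filter a stream of these measurements and
steer the oscillator's control voltage to drive the offset toward zero.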
 
ppm = parts per million or microseconds per second
ppb = parts per billion or nanoseconds per second
 
-Alan
 


On January 19, 2012 at 11:52 PM Ron Frazier <atllinuxenthinfo at c3energy.com>
wrote:

> Hi guys,
>
> I've been doing some additional research on computer time keeping and
> such.  I just read that GPS time does not account for leap seconds (the
> seconds periodically added to match UTC time with astronomical time). 
> The statement also said that because of this, there is about a 15 second
> difference between GPS time and UTC time, even though the clocks are
> highly accurate.  Does anyone know if this is true?  If so, a GPS clock
> might not be the best source for time on a computer network, especially
> if computers being communicated to are being synced to UTC.
>
> I'd also REALLY like to know why the clocks in computers are so widely
> variable.  I know the software clock in the OS is synced to the hardware
> clock at boot.  But, after that, it apparently varies widely in
> performance, even though it's receiving periodic interrupts from the
> hardware clock.  Is it really the case that some routines switch off the
> hardware interrupts, causing the software clock to miss cycles?  If
> that's true, why are user-level programs allowed to do that?  You'd
> think processing the hardware interrupt from the hardware clock would be
> a pretty important thing.
>
> Thanks in advance for any info you share.
>
> Sincerely,
>
> Ron
>
> --
>
> (PS - If you email me and don't get a quick response, you might want to
> call on the phone.  I get about 300 emails per day from alternate energy
> mailing lists and such.  I don't always see new messages very quickly.)
>
> Ron Frazier
>
> 770-205-9422 (O)   Leave a message.
> linuxdude AT c3energy.com
>
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> http://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at
> http://mail.ale.org/mailman/listinfo