Re: [r6rs-discuss] [Scheme-reports] Date and time arithmetic library proposal for R7RS large Scheme
On Mon, 2010-11-29 at 17:15 -0500, John Cowan wrote:
> Thomas Bushnell, BSG scripsit:
>
> > If the interface says "number of seconds since the epoch, not counting
> > leap seconds" (which is what Posix's gettimeofday is), then let it be
> > that. "Add 1" to the value means "add one second". The precision is
> > simply the precision of the particular numeric representation.
>
> Anything described as "number of seconds" is evidently an integer,
> since we count things with integers. And yet you say not to think
> about integers.
There are numbers of seconds that are not integers. That's okay.
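For example, half a second past a whole-second count is a perfectly
good exact number in Scheme. A quick sketch (the epoch count here is
made up, and no particular time constructor is assumed):

    (define t (+ 1291068900 1/2))  ; half a second past a whole count
    t              ; => 2582137801/2, an exact non-integer number of seconds
    (integer? t)   ; => #f
    (exact? t)     ; => #t
    (+ t 1)        ; "add one second" => 2582137803/2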
> > representation to be exact. So I agree that there should be no
> > recommendation about what sort of numeric format to use. Keep in mind that
> > "inexact rational" does not mean "floating point" in Scheme.
>
> In principle, no; in practice, it definitely does. There are no Schemes
> out there which use something other than floats for inexact rationals,
> and the great bulk of them use 64-bit IEEE floats only.
If we're talking standards here, we should just say "inexact real" and
not worry about how particular implementations represent them.
> > I think we should have an interface for it, but alas, Linux and Posix don't
> > provide a way to get it. Given NTP and the granularity of clock interrupts,
> > the accuracy is known in some sense to the system as a whole, but in
> > practice difficult to determine.
>
> It sounds like you are talking about precision, not accuracy. If the
> clock is off by a day because I botched setting it, do you expect the
> system to know that and report it?
I'm willing to forgive a Scheme system that doesn't return a time
value more accurate than the system clock. That is a justifiable
implementation restriction. The system clock can't know whether it
is correctly set, but it can generally know its own precision.
It is reasonable IMO to require a time library to report the
available precision.
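Something along these lines would do; the names below (time-resolution
and so on) are only illustrative, not proposed spec language, and the
Posix binding is just one plausible implementation:

    ;; Illustrative names only -- not a proposal.
    ;; (current-time)    => seconds since the epoch, as a real number
    ;; (time-resolution) => the clock's smallest increment, in seconds
    ;;
    ;; A system whose clock comes from Posix gettimeofday(2), which
    ;; reports microseconds, might simply say:
    (define (time-resolution) 1/1000000)
    ;; A caller can then clamp a timestamp to what the clock can
    ;; actually distinguish:
    (define (clamp-to-resolution t)
      (* (round (/ t (time-resolution))) (time-resolution)))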
In some circumstances a system clock can have a known inaccuracy,
but I don't think it's reasonable to anticipate and report all the
different forms that inaccuracy could take. One that I remember from
a real system was days/hours/minutes/seconds driven by a
low-granularity but accurate timer, with "fake microseconds" driven
by a counter on the machine's instruction dispatch. All you could
say about microseconds on that system was that a greater number
meant a later time, and that the machine would "usually" count up
to some "randomish" number between 600K and 700K microseconds
before dropping back to zero at the start of the next second, but
"occasionally" the count might be as low as 400K or as high as
950K, depending on what the machine was doing, because some
instructions take fewer cycles than others.
But I don't think many systems, even embedded systems, are still
dealing with kluges and hacks like this; and given that different
kluges and hacks can yield a near-infinite variety of known
inaccuracies, I can't imagine a useful way to report them anyway.
So I don't think the standard ought to worry about it. If a
particular implementation has to deal with something like that,
just let it return something as accurate as the system clock,
even if that's not very accurate, and consider it an implementation
restriction imposed by the environment.
Bear