On 2010-11-26, at 9:00 PM, John Cowan wrote:
> Marc Feeley scripsit:
>
>> Please don't count time using milliseconds. It clutters my brain to
>> have to remember a different unit of time than plain seconds.
>
> And yet the SI unit of mass is the kilogram. But I'll think about that.
I'm not sure why you bring kilograms into the discussion. We're talking about time, and the SI unit for time is the second.
>> Moreover, the choice of milliseconds, rather than microseconds or
>> nanoseconds is purely an artifact of the current speed of computers.
>
> I think it's more about range vs. precision issues.
The issue only appears if you represent time with *integers*. If you use floats, the difference between using milliseconds and seconds as the unit is basically absorbed by the float's exponent.
I concede that if one wants to represent an integer number of milliseconds precisely, then using milliseconds as the unit avoids a (very small) numerical error. For example, 1 millisecond is represented exactly if you use milliseconds as the unit, and there is a relative error of about 2e-17 when using seconds as the unit, because:
(exact->inexact 1/1000) = 0.001000000000000000020816681711721685132943093776702880859375
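For the record, that figure can be checked with exact arithmetic (a quick sanity check of mine, in any implementation with exact rationals):

(exact->inexact
 (/ (- (inexact->exact (exact->inexact 1/1000)) 1/1000)  ; absolute error of the closest flonum to 1/1000
    1/1000))                                             ; divide by 1/1000 to get the relative error
; => 2.0816681711721685e-17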
>> Integers shouldn't be used for measuring time points because
>> applications need different resolutions.
>
> That's a strong point for requiring floats, but ...
>
>> With a 64 bit float, you can represent a time interval of up to
>> 3 months with a nanosecond resolution, and up to 266 years with a
>> microsecond resolution. I don't see any practical reason for wanting
>> more than this.
>
> In the new all-64-bit world, 60-bit fixnums will have more range than
> 53-bit flonums, and they will not need to be boxed, which makes them
> faster to fling around.
There will always be a need for smaller processors because they cost less and take less space and power. There's a need for 8-bit processors in the teeny world (thermostats), 16-bit processors (microwave ovens), and 32-bit processors (cell phones, video game consoles).
The problem with an integer representation of time is that a scaling factor has to be chosen by the *writer of the spec*. That binding time is much too early. Different applications have different resolution needs. You propose milliseconds, but my application needs microseconds and the next guy needs nanoseconds. Even if you choose nanoseconds for the spec to cover all foreseeable needs, somewhere, someday, an application will need picoseconds, and it won't be able to use your time API.
Floating-point numbers were invented to handle these scaling issues.
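To make that concrete, here is a small illustration (my own example, assuming IEEE 754 double precision flonums): with seconds as the unit, a flonum timestamp three months in still resolves steps of about one nanosecond, which is where the "3 months with a nanosecond resolution" figure above comes from.

(let ((three-months (* 90.0 24 3600)))    ; 7776000. seconds
  (- (+ three-months 1e-9) three-months)) ; => about 9.3e-10, i.e. a 1 ns step is still resolvable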
The issue of the representation of time can be avoided by not specifying the representation in the API, and instead providing functions like:
(seconds->time n)
(milliseconds->time n)
(microseconds->time n)
(nanoseconds->time n)
Systems that use an exact integer representation of time with milliseconds as the unit can define:
(define (seconds->time n) (round (* n 1000)))
(define (milliseconds->time n) (round n))
(define (microseconds->time n) (round (/ n 1000)))
(define (nanoseconds->time n) (round (/ n 1000000)))
And those that use a float representation of time with seconds as the unit can define:
(define (seconds->time n) n)
(define (milliseconds->time n) (/ n 1000))
(define (microseconds->time n) (/ n 1000000))
(define (nanoseconds->time n) (/ n 1000000000))
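To illustrate (these calls are my own examples, with exact integer arguments; the result depends on which set of definitions is loaded):

(seconds->time 2)          ; => 2000 with the integer/millisecond definitions, 2 with the float/second ones
(milliseconds->time 1)     ; => 1 with the integer/millisecond definitions, 1/1000 with the float/second ones
(microseconds->time 1500)  ; => 2 with the integer/millisecond definitions, 3/2000 with the float/second ones

The client code is identical in both cases; only the internal unit and representation differ.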
Perhaps the "->time" suffix can be dropped to shorten the names.
Marc