$time(5)

$time - The current system time

$time "string"
$unix-time "integer"
$time is a constantly changing variable which is set to the current system time. The format of $time is "YYYYCCCMMDDWhhmmssSSS", where:-

YYYY   The 4 digit year.
CCC    The 3 digit day of the year.
MM     The 2 digit month of the year.
DD     The 2 digit day of the month.
W      The single digit day of the week.
hh     The 2 digit hour (24 hour clock).
mm     The 2 digit minute.
ss     The 2 digit second.
SSS    The 3 digit millisecond.
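
For example, the individual fields can be extracted with the &lef(4) and &mid(4) string operators. The following snippet is illustrative only (it is not part of the original macro set) and uses the field offsets implied by the layout above:-

; take a single snapshot of $time so all fields come from the same instant
set-variable #g0 $time
; extract the year (offset 0), month (7), day (9) and hh:mm:ss (12, 14, 16)
ml-write &spr "Date %s-%s-%s  Time %s:%s:%s" &lef #g0 4 &mid #g0 7 2 &mid #g0 9 2 &mid #g0 12 2 &mid #g0 14 2 &mid #g0 16 2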
$time can be set to an integer value which is a time offset in seconds, for example, if the following was executed:-

set-variable $time "3600"
ml-write &cat "$time is " $time
set-variable $time "0"

The written time would be one hour ahead of the system time.
$unix-time is also a constantly changing variable, however this variable cannot be set. Its value is a count of the number of seconds since 1970-01-01 00:00:00 UTC, otherwise known as the UNIX epoch time, presented as a floating point number with a guaranteed 9 decimal places, so its length is 20 characters. The fraction part is primarily intended to provide a sub-second timer at the highest resolution the system can realistically deliver; however, the actual resolution is platform specific and should only be considered accurate to 10ms at best, see the example section below.
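
For example, the current value can be written to the message line with the following illustrative one-liner, the output taking the fixed 20 character form described above:-

ml-write &cat "The epoch time is " $unix-time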
The following macro uses $time to calculate the time taken to execute a user command:-
define-macro time
    ; get the command to time, either from the macro argument or the user
    !force set-variable #l2 @1
    !iif &not $status  set-variable #l2 @ml00 "Time command"
    ; record the start time, run the command, record the end time
    set-variable #l0 $time
    !force execute-line #l2
    set-variable #l1 $time
    ; convert the hhmmss fields of the start (#l0) and end (#l1) times to seconds
    set-variable #l2 &add &mid #l0 16 2 &mul 60 &add &mid #l0 14 2 &mul 60 &mid #l0 12 2
    set-variable #l3 &add &mid #l1 16 2 &mul 60 &add &mid #l1 14 2 &mul 60 &mid #l1 12 2
    ; if the millisecond difference is negative, borrow one second
    !if &les &set #l4 &sub &rig #l1 18 &rig #l0 18 0
        set-variable #l2 &add #l2 1
        set-variable #l4 &add 1000 #l4
    !endif
    ml-write &spr "Command took %d.%03d sec" &sub #l3 #l2 #l4
!emacro
The following implementation uses $unix-time and &fsub(4):
define-macro time
    ; get the command to time, either from the macro argument or the user
    !force set-variable #l2 @1
    !iif &not $status  set-variable #l2 @ml00 "Time command"
    ; record the epoch time either side of the command execution
    set-variable #l0 $unix-time
    !force execute-line #l2
    set-variable #l1 $unix-time
    ml-write &spr "Command took %.6f sec" &fsub #l1 #l0
!emacro
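
Either version can be executed interactively, in which case the command to be timed is prompted for on the message line, or called from another macro with the command line given as the first argument (picked up via @1). For example, the following illustrative call times the sort-lines(2) command:-

time "sort-lines"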
This implementation of $unix-time suggests nanosecond accuracy, however this is unrealistic given the granularity and limitations of the system clock; a more realistic level is around 1 microsecond. Even at this level the variability of the CPU task scheduler, the time taken to read the clock and the algorithms used can all degrade the actual accuracy of the measurement. For example, the above macro code uses floating-point maths (&fsub); double precision floats, as used by MicroEmacs, only provide around 15 to 16 significant figures of accuracy. Consider the following:
set-variable #g1 &fsub "1234567890.000010000" "1234567890.000000000"
set-variable #g2 &fsub "1234567890.000001000" "1234567890.000000000"
set-variable #g3 &fsub "1234567890.000000100" "1234567890.000000000"
The answer given for #g1 typically has an error of around 0.1%, the answer for #g2 an error of around 5%, and #g3 simply gives 0. This means that if simple floating point maths is used the maximum achievable accuracy is 1 microsecond +/- 5%. Attempting to measure performance down to this level is generally a waste of time; it is far better to increase the length of the task (i.e. run it 1000 times and divide the total time by 1000).
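
As a sketch of that approach (illustrative only; the macro name time-1000 is hypothetical and not part of the standard macro set), the command can be run 1000 times and the total time reported, from which the average per run is easily obtained:-

define-macro time-1000
    ; prompt for the command to be timed
    set-variable #l2 @ml00 "Time command"
    set-variable #l5 1000
    set-variable #l0 $unix-time
*next-run
    !force execute-line #l2
    set-variable #l5 &sub #l5 1
    !if &gre #l5 0
        !goto next-run
    !endif
    set-variable #l1 $unix-time
    ; total time for 1000 runs; divide by 1000 to get the average per run
    ml-write &spr "1000 runs took %.3f sec" &fsub #l1 #l0
!emacro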
(c) Copyright JASSPA 2025
Last Modified: 2025/09/06
Generated On: 2025/09/29