Convert UTC seconds to actual time.

Harshad

Hi,
I have been searching the net for a possible solution but have not
been able to find a concrete one.
I am trying to convert a long value of UTC seconds (1278360000) to
local time. I could not find a way to do it.

Regards,
Harshad.
 
Nick Keighley

I have been searching the net for a possible solution but have not
been able to find a concrete one.
I am trying to convert a long value of UTC seconds (1278360000) to
local time. I could not find a way to do it.

divide by 60 to get minutes etc.
 
Nick Keighley

divide by 60 to get minutes etc.

pseudo code

(define (seconds->time secs)
  (let* ((minutes (quotient secs 60))
         (seconds (remainder secs 60)))
    (let* ((hours (quotient minutes 60))
           (minutes (remainder minutes 60)))
      (list hours minutes seconds))))

a quick test:-

(seconds->time (+ 3600 120 3))
(1 2 3)
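
For anyone following along in C rather than Scheme, the same
decomposition looks roughly like this (a sketch only; it ignores days
and dates entirely):

#include <stdio.h>

/* Break a plain second count into hours, minutes and seconds. */
static void seconds_to_time(long secs, long *h, long *m, long *s)
{
    long minutes = secs / 60;
    *s = secs % 60;
    *h = minutes / 60;
    *m = minutes % 60;
}

int main(void)
{
    long h, m, s;
    seconds_to_time(3600 + 120 + 3, &h, &m, &s);
    printf("%ld %ld %ld\n", h, m, s);   /* prints: 1 2 3 */
    return 0;
}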
 
Ike Naar

I have been searching the net for a possible solution but have not
been able to find a concrete one.
I am trying to convert a long value of UTC seconds (1278360000) to
local time. I could not find a way to do it.

The <time.h> header has a couple of functions that might help
solve your problem, e.g. ctime() or localtime().
 
Vincenzo Mercuri

Harshad wrote:
Hi,
I have been searching the net for a possible solution but have not
been able to find a concrete one.
I am trying to convert a long value of UTC seconds (1278360000) to
local time. I could not find a way to do it.

Regards,
Harshad.


Maybe this will help you:


#include <time.h>
#include <stdio.h>

int main(void)
{
    const time_t calendar_time = (time_t) 1278360000;
    struct tm *broken_form = localtime( &calendar_time );

    printf("\nUTC time and date: %s\n", asctime(broken_form));

    return 0;
}


Regards
 
Vincenzo Mercuri

Vincenzo Mercuri wrote:
Harshad wrote:


Maybe this will help you:


#include <time.h>
#include <stdio.h>

int main(void)
{
    const time_t calendar_time = (time_t) 1278360000;
    struct tm *broken_form = localtime( &calendar_time );

    printf("\nUTC time and date: %s\n", asctime(broken_form));


replace "UTC" with "Local"
 
Alexander Klauer

Vincenzo said:
Harshad wrote:
I have been searching the net for a possible solution but have not
been able to find a concrete one.
I am trying to convert a long value of UTC seconds (1278360000) to
local time. I could not find a way to do it.

Maybe this will help you: [...]
const time_t calendar_time = (time_t) 1278360000;
[...]

How the value of a time_t encodes the time is unspecified (N1256 7.23.2.4,
2nd paragraph).

Harshad, I'm afraid you're on your own; the standard library functions
aren't much help here.
 
Ben Bacarisse

Harshad said:
I have been searching the net for a possible solution but have not
been able to find a concrete one.
I am trying to convert a long value of UTC seconds (1278360000) to
local time. I could not find a way to do it.

If this is coursework then the hints you've had about doing it yourself
might be enough to get you going, but if this is a real world problem
then re-inventing the wheel is not the right way to go. It is a complex
problem and doing it right is much harder than the hints you've had
suggest.

C's standard time functions don't make any assumption about the "epoch"
(the time that corresponds to a second count of zero) but other
standards do. POSIX defines it to be midnight on Jan 1st 1970 and if
this is your epoch then functions exist on almost every platform to
convert second counts into calendar dates directly. If you have a
different zero time, simple arithmetic can be used to convert from one
epoch to another.

You can't use this sort of conversion approach with standard C functions
because the standard does not give enough guarantees about how time_t
values represent times.
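
For instance, on a platform where time_t is known to be a POSIX-style
count of seconds since 1970, the whole job collapses to a couple of
calls; a minimal sketch (the year-2000 offset in the comment is purely
illustrative):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Assumption: this platform's time_t counts seconds since the
       POSIX Epoch, 1970-01-01 00:00:00 UTC. */
    long utc_seconds = 1278360000L;

    /* If your count used some other epoch, shift it first; e.g. a
       hypothetical count from 2000-01-01 00:00:00 UTC would need
       utc_seconds += 946684800L; before conversion. */
    time_t t = (time_t) utc_seconds;
    struct tm *local = localtime(&t);

    if (local != NULL)
        printf("Local time and date: %s", asctime(local));
    return 0;
}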
 
Vincenzo Mercuri

Richard Heathfield wrote:
Would it be legal in a POSIX-compliant environment (given that
1278360000 is in Unix time)?

Or is it just not allowed to make assumptions about the type time_t in general?
 
Vincenzo Mercuri

Eric Sosman wrote:
POSIX systems place more requirements on time_t and its
representation than C itself does. If you're writing for POSIX
systems, you can rely on POSIX' extra guarantees (and extra
capabilities). See comp.unix.programmer for more information.

Actually I am a bit confused about one thing:
to what degree is a C compiler independent of the
operating system it is running on? For example,
is the time_t type implemented in gcc or in glibc? Does the gcc
compiler (though I might ask the same about other compilers)
rely completely on glibc, or does it have its own
independent static libraries with requirements that differ
from POSIX (or Windows)?
Anyway, I am going to post such questions to
comp.unix.programmer as well.

Thank you, whatever your answer will be.
 
Alexander Klauer

Richard said:
Not so right. See my parallel reply. Once you've established the base
date for UTC seconds, you can stuff that into a struct tm. Having got
your UTC count down into manageable quantities (everything fits in an
int), you can use mktime to add it to your base date, and everything is
now hunky-dory (or, at the very least, dory).

I've read your post, but how exactly does mktime() enter there? I mean, all
the adding is done manually in a struct tm, is it not? Or do you mean a
normalisation step (mktime() followed by a localtime()) after the addition?
So if you have your normalised base date in, say, struct tm bd, and your
UTC seconds in long utc_seconds, you do

bd.tm_mday += utc_seconds / 86400L;
bd.tm_min += (utc_seconds % 86400L) / 60L;
bd.tm_sec += utc_seconds % 60;

(assuming 0 <= utc_seconds < (INT_MAX - 30)*86400) and then use mktime()
followed by localtime() to normalise?
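
Spelled out as a complete program, the idea would look something like
this sketch (the 1970 base date and the tm_isdst handling are
illustrative assumptions, not part of the suggestion above):

#include <stdio.h>
#include <time.h>

int main(void)
{
    long utc_seconds = 1278360000L;  /* count from the base date */

    /* Base date; note mktime() interprets the fields as *local* time,
       so the base must be expressed in local terms (or the UTC offset
       accounted for separately). */
    struct tm bd = {0};
    bd.tm_year  = 70;   /* 1970 (years since 1900) */
    bd.tm_mon   = 0;    /* January */
    bd.tm_mday  = 1;
    bd.tm_isdst = -1;   /* let mktime() determine DST */

    bd.tm_mday += (int)(utc_seconds / 86400L);
    bd.tm_min  += (int)((utc_seconds % 86400L) / 60L);
    bd.tm_sec  += (int)(utc_seconds % 60L);

    /* mktime() normalises bd in place, so it can be printed directly. */
    if (mktime(&bd) != (time_t)-1)
        printf("%s", asctime(&bd));
    return 0;
}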
 
Eric Sosman

Vincenzo Mercuri wrote:

Actually I am a bit confused about one thing:
to what degree is a C compiler independent of the
operating system it is running on? For example,
is the time_t type implemented in gcc or in glibc? Does the gcc
compiler (though I might ask the same about other compilers)
rely completely on glibc, or does it have its own
independent static libraries with requirements that differ
from POSIX (or Windows)?

We have to distinguish between the cross-platform definition
of C and the realizations of C on a particular platform or family
of platforms.

The Standard describes the features of C that are common to
all platforms. It also describes which features are optional (e.g.,
int36_t is not required to be present), and the permitted variations
among features (e.g., whether char is signed or unsigned). But,
after taking the allowed freedoms into account, the Standard describes
"fully portable C."

Any particular implementation of C will, for starters, make the
choices the Standard has left open: The underlying type of time_t,
the numeric value that encodes 'x', and so on. It is also likely
to add further specifications: The allowed syntax for file names in
fopen(), how time_t is encoded, the meanings of exit() arguments
beyond those specified in the Standard, and the like. It may add
features not mentioned in C's definition, like memory-mapped files
and network facilities. The result is a "platform-specific C."

It is up to you as a programmer to find an appropriate balance
between "fully portable" and "platform-specific" characteristics,
and the choices will be different for different projects. You may
write a piece of code that does something highly specific to the
Frobozz Magic C implementation on the DeathStation 9000, and is of
interest only in FMC/DS9K environments: In that case, there's little
reason to avoid using FMC/DS9K-specific features. On the other hand,
you might write code that parses the command-line arguments of main()
in a certain way, and you'd like to use this code in many programs on
many environments: In that case, you'd be well-advised to stick as
closely to "fully portable" C as you can. And there are intermediate
positions, too: You can write for a platform family (like POSIX or
Win32) that provides features available in several environments but
not necessarily present in C implementations outside the family.

As for whether time_t is a creature of the compiler or of the
library, the Standard's view is that both the compiler and the library
are part of "the implementation," so the question isn't answerable.
The language is the language, and it includes the library (except for
free-standing implementations, which I'll just avoid talking about).

But again, when you get to a particular realization of C on a
particular platform, the division between compiler and library gets
clearer. In the case of gcc/glibc, it is almost certainly glibc's
business to choose an appropriate representation for time_t and to
"publish" its choice in <time.h> so the compiler can then know how
to deal with it. But some fuzziness quite likely remains: If you
call sqrt(), for example, "compiler magic" may generate a square-
root instruction right there in the middle of your code, rather
than generating a call to a square-root function in the library.
There's fuzziness the other way, too: Some machines lack hardware
support for integer division, so `m / n' may wind up as a call to
a hidden library function. And there are some areas where the library
and the compiler must "conspire" to get the right effect: setjmp()
and longjmp(), for example, are probably not completely library- nor
completely compiler-implemented.

So: When you ask whether thus-and-such a feature is a creature
of the compiler or of the library, (1) from the fully portable point
of view the question makes no sense, and (2) from the point of view
of a particular implementation or family of implementations the
question is probably answerable, at least in part.
 
Ersek, Laszlo

(c.u.p. added)

POSIX systems place more requirements on time_t and its
representation than C itself does. If you're writing for POSIX
systems, you can rely on POSIX' extra guarantees (and extra
capabilities). See comp.unix.programmer for more information.

(In the specific matter of timekeeping, it's my understanding
that POSIX *requires* a degree of inaccuracy by *defining* the day
as 86400 seconds, leap seconds be damned. POSIX time is therefore
twenty-four seconds off of UTC, and drifting slowly. But perhaps
I've not understood the issues properly; again, comp.unix.programmer
is a better place to ask. The C Standard does not require this
inaccuracy -- but then, the C Standard does not require any stated
degree of accuracy, either.)

I think you are right.

"Seconds Since the Epoch", normative:

http://www.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_15

Informative (rationale):

http://www.opengroup.org/onlinepubs/9699919799/xrat/V4_xbd_chap04.html#tag_21_04_15

My interpretation:

IIUC, POSIX introduces its own "seconds since the Epoch" (= scalar) and
"UTC name" (= broken-down) definitions, and the strict correspondence
between them.

If one takes the current time (looking at his wristwatch) and considers it
a "UTC name", then the corresponding POSIX "seconds since the Epoch" value
is only an approximation of the number of seconds that have in fact
elapsed since the Epoch, in the real world. (This is the mktime()
direction.)

Conversely, the gettimeofday() -> localtime() chain should work
intuitively; the system ensures that gettimeofday() fills in such a value
that (1) has unspecified relation to the real-world number of seconds
elapsed since the Epoch, (2) makes localtime() return with a broken-down
"UTC name" that matches the wall-clock date and time of the gettimeofday()
call, and (3) is suitable for checking differences (under newer versions,
clock_gettime() with CLOCK_MONOTONIC should be used for that end).

The "wall-clock" and the "wristwatch" in the above is reset when Daylight
Saving Time requires it, and, more surprisingly, can display 61 seconds
(and 62, up to and including SUSv2).

(End of my interp.)

Returning to the original question, the cited SUSv4 Rationale says:

----v----
[...] it is important that all conforming systems interpret "536457599
seconds since the Epoch" as 59 seconds, 59 minutes, 23 hours 31 December
1986 [...]
----^----

The "seconds since the Epoch" expression is quoted in the above, thus it
should refer to the POSIX definition (i.e. time_t), not the real-world
positive number.
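
That requirement is easy to check on a POSIX-flavoured system; a
minimal sketch (it assumes time_t is the POSIX seconds-since-the-Epoch
count):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t t = 536457599;
    struct tm *utc = gmtime(&t);

    if (utc != NULL)              /* expect: Wed Dec 31 23:59:59 1986 */
        printf("%s", asctime(utc));
    return 0;
}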

lacos
 
Nobody

Actually I am a bit confused about one thing:
to what degree is a C compiler independent of the
operating system it is running on? For example,
is the time_t type implemented in gcc or in glibc?

In the library, e.g. glibc (Linux) or MSVCRT (Windows).

The standard doesn't directly define this, but a reasonable heuristic is
that the compiler implements anything which is required of a freestanding
implementation, while the library implements the additional features
required for a hosted implementation.

In practice, this isn't entirely true. Calls to certain functions may be
implemented as inline code by the compiler if this can enable significant
optimisation. This is often the case for memcpy(), and also for many of
the <math.h> functions, if the CPU/FPU provides a corresponding
instruction.

Compilers typically have a way to disable this behaviour in case you need
to generate a call to an external function (e.g. for profiling).
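
With gcc, for instance, the inline expansion can be suppressed globally
with -fno-builtin or per function with -fno-builtin-memcpy and friends;
a small illustration (the flags are gcc-specific):

/* Compile with:  gcc -fno-builtin-memcpy demo.c
   to force a genuine call to the library's memcpy() rather than the
   compiler's inline expansion (handy for profiling or interposing). */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char src[] = "hello";
    char dst[sizeof src];

    memcpy(dst, src, sizeof src);
    puts(dst);
    return 0;
}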
 
Vincenzo Mercuri

Eric Sosman wrote:
We have to distinguish between the cross-platform definition
of C and the realizations of C on a particular platform or family
of platforms.
[...]
So: When you ask whether thus-and-such a feature is a creature
of the compiler or of the library, (1) from the fully portable point
of view the question makes no sense, and (2) from the point of view
of a particular implementation or family of implementations the
question is probably answerable, at least in part.

Thank you, I understood your explanation.
It couldn't be clearer. I feel I owe you
a great deal lately. It's so difficult for me
to find answers like this on my own,
and I might get nowhere just by reading
my books. That's why I consider your help invaluable.
 
Vincenzo Mercuri

Nobody wrote:
In the library, e.g. glibc (Linux) or MSVCRT (Windows).

The standard doesn't directly define this, but a reasonable heuristic is
that the compiler implements anything which is required of a freestanding
implementation, while the library implements the additional features
required for a hosted implementation.

Thanks. Do you know of an option that shows which library we are
linking against during 'compilation'?

In practice, this isn't entirely true. Calls to certain functions may be
implemented as inline code by the compiler if this can enable significant
optimisation. This is often the case for memcpy(), and also for many of
the <math.h> functions, if the CPU/FPU provides a corresponding
instruction.

Compilers typically have a way to disable this behaviour in case you need
to generate a call to an external function (e.g. for profiling).

And this is exactly what I mean to do!
 
Steve Allen

I am trying to convert a long value of UTC seconds (1278360000) to
local time. I could not find a way to do it.

"local time" means that which is valid in your jurisdiction?
In that case the legal answer may be more complex than anyone wants.
The amount of time legally elapsed differs from one country to
another.
See the problem with the international interpretations here:

http://www.ucolick.org/~sla/leapsecs/epochtime.html
 
Nobody

Thanks. Do you know of an option that shows which library we are
linking against during 'compilation'?

Not directly. gcc has the -dumpspecs switch which dumps the rules which
determine which options are implied by other options. There's also
-nostdlib, -nostartfiles and -nodefaultlibs, which inhibit linking the
default libraries and CRT files. Or you can skip gcc and invoke ld
directly.
 
David Thompson

Start off by finding out how many leap years you have. There are
(365 * 4) + 1 = 1461 days in a group of four years including a leap
year, and 86400 * 1461 seconds in such a group.
86400 * 1461 is 126230400. (I am ignoring leap seconds.)
If the user(s) accept GMT seconds instead of UTC, and most people are
just repeating buzzwords and haven't a clue there's any difference,
you're set. (Some people could even present it as patriotic.)

1278360000 / 126230400 = 10 remainder 16056000.

So you have 10 lots of four years, plus a number of seconds remaining.

Superpedantry: this assumes the common Western (Gregorian) calendar.
There are and have been other calendars, with different rules.
Though admittedly not in Usenet or (mostly?) Internet standards.
For that matter, it assumes 'local' on Earth; we don't (yet?!) know
what scheme(s) will be used for other planets, bodies, or free space.

This doesn't quite work if you cross a century year that isn't a
multiple of 400. (We were lucky(?) that practical computing arrived so
near the (Christian) second millenni-end.) It's possible to do the
146097-day period, but I prefer, when applicable, to just test
afterwards and do the small correction.
For the common Unixy 32-bit-signed-long from 1970 this is not an issue,
because you can only reach 1901 to 2038.
There are 86400 seconds in a day, so that's

16056000 / 86400 = 185 days, leaving 72000 seconds (20 hours).

So all you have to do is decide which local time 0 UTC seconds
translates to, and add 40 years 185 days 20 hours 0 minutes 0 seconds to
that time. Be careful, though - if base date + days added (not including
whole numbers of four years) takes you past a leap year boundary, you
need to adjust accordingly.

Per above, not quite every fourth year has a leap day, so you (may) need
to do that the other way round: first do base + four-year groups, then if
adding the residual crosses a leap day, adjust. And remember that in
direct Gregorian reckoning the boundary occurs within the year; it
actually makes the code a little simpler to compute from a March origin
and then adjust at the end.
Also, the base could have a nonzero time of day, like the astronomical
Julian Day origin at noon GMT~UTC.

Plus adjust for daylight/summer time as applicable, if that's included
in your definition of 'local' -- for most people (and apps) it is,
even though I don't consider it inherent. And the 'rules' for that are
a nightmare. The best solution is to just use the Olson tz database;
usually it's right, and if it's not you've got plenty of company.
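
As a check on the four-year arithmetic earlier in this post, a minimal
sketch (ignoring the century rule and leap seconds, as above):

#include <stdio.h>

int main(void)
{
    long secs = 1278360000L;
    const long four_years = 86400L * 1461;   /* 126230400 seconds */

    long groups = secs / four_years;         /* lots of four years */
    long rem    = secs % four_years;
    long days   = rem / 86400L;
    long hours  = (rem % 86400L) / 3600L;

    /* From a 1970-01-01 00:00 base: 10 * 4 = 40 years, 185 days,
       20 hours -> 2010-07-05 20:00, before any leap-day/DST fixups. */
    printf("%ld four-year groups, %ld days, %ld hours\n",
           groups, days, hours);
    return 0;
}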
 
