 The 2038 Syndrome: Second Coming of Y2K
Insectoid
PostPosted: Tue Apr 23, 2013 2:01 pm   Post subject: RE:The 2038 Syndrome: Second Coming of Y2K

Your compiler doesn't care if you go over 2038. Depending on what you're using the dates for, your program might have no issues at all, or it might crash or give unexpected output. It's a logic error, not a syntax error, and compilers don't care about logic, only syntax.
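For instance, here is a minimal C sketch of that distinction (the truncation back to 32 bits is implementation-defined, but it wraps on every mainstream two's-complement machine): the compiler accepts every line without complaint, yet the program prints a day in 1901.

C:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* Legacy-style code that keeps a timestamp in a 32-bit signed int. */
    int32_t last_second = INT32_MAX;    /* 03:14:07 UTC, Jan 19, 2038 */

    /* One second later, computed in 64 bits, then truncated back to
       32 bits the way a legacy struct field or file format would. */
    int32_t next_second = (int32_t)((int64_t)last_second + 1);

    /* Perfectly valid syntax, wildly wrong logic. */
    time_t t = (time_t)next_second;
    printf("%s", asctime(gmtime(&t)));  /* Fri Dec 13 20:45:52 1901 */
    return 0;
}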
randint
PostPosted: Tue Apr 23, 2013 7:53 pm   Post subject: RE:The 2038 Syndrome: Second Coming of Y2K

Maybe you cast the time_t into a long, and that is why it can go beyond 2038.
randint
PostPosted: Fri Jul 19, 2013 12:43 pm   Post subject: Re: The 2038 Syndrome: Second Coming of Y2K

My computer does not suffer from 2038 syndrome! Yay!

Terminal output (Ubuntu 13.04, 64-bit) from a Perl script:

code:

Tue Jan 19 03:14:01 2038
Tue Jan 19 03:14:02 2038
Tue Jan 19 03:14:03 2038
Tue Jan 19 03:14:04 2038
Tue Jan 19 03:14:05 2038
Tue Jan 19 03:14:06 2038
Tue Jan 19 03:14:07 2038
Tue Jan 19 03:14:08 2038
Tue Jan 19 03:14:09 2038
Tue Jan 19 03:14:10 2038


from this code
Perl:

#!/usr/bin/perl
#
# I've seen a few versions of this algorithm
# online, I don't know who to credit. I assume
# this code to be GPL unless proven otherwise.
# Comments provided by William Porquet, February 2004.
# You may need to change the line above to
# reflect the location of your Perl binary
# (e.g. "#!/usr/local/bin/perl").
# Also change this file's name to '2038.pl'.
# Don't forget to make this file +x with "chmod".
# On Linux, you can run this from a command line like this:
# ./2038.pl
use POSIX;
# Use POSIX (Portable Operating System Interface),
# a set of standard operating system interfaces.
$ENV{'TZ'} = "GMT";
# Set the Time Zone to GMT (Greenwich Mean Time) for date calculations.
for (my $clock = 2147483641; $clock < 2147483651; $clock++)
{
    print ctime($clock);
}
# Count up in seconds of Epoch time just before and after the critical event.
# Print out the corresponding date in Gregorian calendar for each result.
# Are the date and time outputs correct after the critical event second?
btiffin
PostPosted: Fri Jul 19, 2013 5:09 pm   Post subject: Re: The 2038 Syndrome: Second Coming of Y2K

randint:

If only.

64-bit systems may (by and large) escape the epoch... (Gee, I know Tony dreads the puns, but I think that was an epoch ellipse) ...but not entirely. Not all software will successfully clock over the threshold.

You need to know for certain that no code is using 32-bit seconds-since-epoch time fields, signed or unsigned. That would mean verifying ALL sources that store time as an integer, not just the modern Perl, Python, Ruby, et al. libraries. Timers will likely prove the most hazardous; future-value calculations might count as second most.
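To make the future-value hazard concrete, here is a hedged C sketch (the 'maturity' field is invented for illustration): a 30-year deadline computed today already does not fit in a 32-bit field, which is why this class of bug fires decades before the actual rollover.

C:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define SECONDS_PER_YEAR (365LL * 24 * 60 * 60)

int main(void)
{
    int64_t now = (int64_t)time(NULL);

    /* A 30-year future date, e.g. a mortgage maturity or license expiry... */
    int64_t maturity = now + 30 * SECONDS_PER_YEAR;

    /* ...stored the legacy way, in a 32-bit seconds-since-epoch field. */
    int32_t stored = (int32_t)maturity;  /* silently truncates past 2^31 - 1 */

    if ((int64_t)stored != maturity)
        printf("overflow: a 32-bit field cannot hold this date\n");
    else
        printf("still fits, for now\n");
    return 0;
}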

Ancient embedded systems will probably be the most fun to remediate: no sources, hard-to-find EPROM readers, hard-to-find chip specs, etcetera.

Anyone wanting to increase their odds of scoring 'gold rush' work in some 20 to 25 years might do well to read up on 8-bit processors like the Z-80 and microcontrollers like the 80C554.

Cheers
btiffin
PostPosted: Fri Jul 19, 2013 5:45 pm   Post subject: Re: RE:The 2038 Syndrome: Second Coming of Y2K

mirhagk @ Sun Apr 21, 2013 7:38 am wrote:
It won't be 70 years though, it'll be 140, because it'll go negative 70 years.


Good point, but not quite right, and it highlights how misleading this problem can be, with devils in the details. Thinking about it:

0 is Jan 1970, -1 is one second back from 1970, and so on; a counter that resets to zero leaves the software assuming it's 1970 again. At the magic 2-billion-second mark (2147483648), signed 32-bit ints wrap to -2147483648 and count back up toward -1, so software suddenly sees a date decades before 1970. That holds in most software systems, which assume 2's complement form, and that isn't the only representation that may be in use in, say, an 8-bit elevator microcontroller.
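To make that mapping concrete, a small C sketch (assuming a 64-bit time_t, as on modern 64-bit Linux) that prints the dates on either side of zero and on either side of the 32-bit limit:

C:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

static void show(int64_t v)
{
    time_t t = (time_t)v;
    printf("%12lld -> %s", (long long)v, asctime(gmtime(&t)));
}

int main(void)
{
    show(-1);            /* Wed Dec 31 23:59:59 1969 */
    show(0);             /* Thu Jan  1 00:00:00 1970 */
    show(2147483647LL);  /* Tue Jan 19 03:14:07 2038, the signed 32-bit max */
    show(-2147483648LL); /* Fri Dec 13 20:45:52 1901, what a wrapped
                            32-bit counter turns into */
    return 0;
}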

Anyone here who thinks that elevators aren't installed with 30-year life expectancies, or that planes don't have life expectancies in the 50-year range, or nuclear reactors, hydro dams, and on and on, is not seeing the world's infrastructure for what it is: old, mostly hidden, and only fixed when it breaks, explodes, or drops on someone's head.

I'm 50 and I still have a few years left to make mistakes. I still get calls supporting software that is 30 years old with no sign of being replaced. (It's been virtualized off a VAX, but it is still running - and yes, it has epoch issues, even after a very expensive Y2K remediation pass.)

Humans learn, then rinse and repeat. Banks are already using forms with 2-digit years.

I arm-waved for Y2K through the '90s, and the thing I learned to fear most was out-of-phase power generation. Y2K had nothing to do with power plants; microcontrollers don't count time in human string form. The epoch issue does affect power plants, though, and it'll need to be verified. Out-of-phase power grids are scary. So scary that I bet humans will panic before the epoch ellipse, spend way too much money on mostly clown-trained technicians, and fix and retrofit around the problem. I believe the 'will fix' part. Humans want to live, and to keep their hoards intact.

So, advice from an old guy: don't be a 40-year-old clown come 2038. Understand the problem, understand the binary representations, and be able to talk about it with professionalism and correctness - if only to calm the nerves of your grandma when the hype hits in 2035.

Umm, not to sound like I know anything technical about the epoch. But I doubt it'll be my job to fix. I'll be 75, bitching at you kids to fix all the flashing clocks, then yelling for everyone to get off my LAN.

Cheers
randint
PostPosted: Fri Jul 19, 2013 8:27 pm   Post subject: RE:The 2038 Syndrome: Second Coming of Y2K

Actually, my computer was purchased 2 months ago from Dell; it looks like a 2013 model, so no ridiculously old hardware can be found in it. Plus, as I said, it runs Ubuntu 13.04, x64.

Just wondering: does "sudo apt-get install <package>" ever install 32-bit software on a 64-bit computer?
nullptr
PostPosted: Sun Jul 21, 2013 10:04 pm   Post subject: Re: The 2038 Syndrome: Second Coming of Y2K

I must be misunderstanding something. I thought that 32- or 64-bit systems referred to the size of the pointers used, not the size of integers themselves. For example, my computer is 64-bit but when I declare an int in C++, it still takes up only 32 bits. Doesn't that mean that the 2038 problem will be independent of whether systems are 32- or 64-bit?
randint
PostPosted: Sun Jul 21, 2013 10:34 pm   Post subject: Re: The 2038 Syndrome: Second Coming of Y2K

I must say that I know very little, if anything at all, about C/C++. I know Java (a C-like language). In Java, the primitive types are as follows:

boolean: only true or false; cannot be converted to or from any other primitive type
byte: 8-bit signed integer ranging from -(2^7) to 2^7 - 1
char: 16-bit unsigned integer ranging from 0 to 2^16 - 1
short: 16-bit signed integer ranging from -(2^15) to 2^15 - 1
int: 32-bit signed integer ranging from -(2^31) to 2^31 - 1
float: 32-bit IEEE-754 signed floating point (decimal) number...I do not know the range
long: 64-bit signed integer ranging from -(2^63) to 2^63 - 1
double: 64-bit IEEE-754 signed floating point (decimal) number...again, I do not know the range

Anyhow, the idea of Java is that, when you do the following:
Java:
int a = 0;

you are declaring a 32-bit signed integer (the only unsigned integer type in Java is char; there is no way to write "unsigned int" as you would in C++).

I hope someone with the expertise can confirm whether what I wrote here is correct. Also, Java runs in a VM, whereas C++ runs natively on the system.
DemonWasp
PostPosted: Mon Jul 22, 2013 12:11 am   Post subject: RE:The 2038 Syndrome: Second Coming of Y2K

Unfortunately, the issue is a bit more complicated than has been discussed so far:

Operating Systems
The "bitness" of an operating system refers to how large an address space it can use -- that is, how much memory it can give pointers to. On 32-bit systems, the limit is of course 2^32 bytes (disregarding Physical Address Extension [PAE], which is a whole other can of worms). On 64-bit systems, it can vary (the upper limit is 2^64, but current versions of operating systems are often built to lower limits).

The size a pointer uses is related, but not necessarily the same: 64-bit systems can use 32-bit pointers as well as 64-bit pointers. This is partially for backwards-compatibility, so that a 64-bit system can run 32-bit binaries. It can also allow 64-bit-aware code to use 32-bit pointers to save space (the Java Virtual Machine, JVM, can do that automatically).
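A quick C illustration of exactly that point: built as a typical 64-bit Linux binary, the pointer comes out at 8 bytes while int stays at 4 (and, happily for randint's Perl script, time_t comes out at 8).

C:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Pointer width and integer width are independent choices. */
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    printf("sizeof(time_t) = %zu\n", sizeof(time_t)); /* the one 2038 cares about */
    return 0;
}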

64-bit Linux systems can install a mix of 32-bit and 64-bit software, and apt and aptitude (as well as probably all the other popular package managers) can handle resolving dependencies correctly so that they both have the libraries they need. Some distributions have made a concerted effort to make all their software available in 64-bit, but even so some software just isn't 64-bit yet. Using 32-bit software on a 64-bit system is seamless, so I'm not even sure which of the programs I use are 32-bit.

But that's just pointers, not integer storage.

Integers
Languages vary in how they store integer data. Java has very well-defined types. Python has arbitrary-length integers, meaning they can store any value. C and C++ have less well-defined base types; for example, 'int' means "at least as large as 'short' and at most as large as 'long int'". Since 'long' and 'short' are defined similarly, there are no set sizes for types in C or C++; compilers are free to define what 'int', 'short', and so forth actually mean, subject only to minimum ranges the standards guarantee.
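You don't have to take any compiler's word for it, either; limits.h reports exactly what this particular compiler decided. A sketch:

C:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The standards pin down minimum ranges; the compiler picks the sizes. */
    printf("CHAR_BIT      = %d\n", CHAR_BIT);
    printf("sizeof(short) = %zu (range %d to %d)\n",
           sizeof(short), SHRT_MIN, SHRT_MAX);
    printf("sizeof(int)   = %zu (range %d to %d)\n",
           sizeof(int), INT_MIN, INT_MAX);
    printf("sizeof(long)  = %zu (range %ld to %ld)\n",
           sizeof(long), LONG_MIN, LONG_MAX);
    return 0;
}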

This is deliberate, as systems are free to define their own memory model; most consumer systems have converged on 8 bits to a byte, 4 or 8 bytes to an int, etc. (mostly, the model randint posted for Java). But some embedded systems (which are everywhere, but largely invisible) use variations, usually something like "2 bytes to an int" but sometimes more exotic options. The whole matter becomes stupidly complicated when you add in character sets, which are what is used to interpret the bits inside a char as a character.

The Epoch
The only way to be sure that software can handle dates past the 32-bit epoch is either to prove it mathematically (incredibly time-consuming, and difficult enough that you'd need a lot of expensive people) or to set the system's time past the epoch and try every supported code path, verifying that dates are never mangled (also very difficult, since you'd have to recognize dates that have been mangled). It would be easy for a human tester to miss a mangled date while visually scanning, and there may not be a good automated solution.
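As a sketch of the second approach, shrunk to a single code path: walk a 64-bit counter across the 32-bit boundary, feed each value to the date-handling code under test (checked_year() below is a hypothetical stand-in), and flag any point where the formatted year jumps backwards. On a platform with a 32-bit time_t the check fires at the wrap; with a 64-bit time_t it stays silent.

C:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Hypothetical system-under-test: extract the year from a timestamp. */
static int checked_year(time_t t)
{
    struct tm *tm = gmtime(&t);
    return tm ? tm->tm_year + 1900 : -1;
}

int main(void)
{
    int last = 0;
    for (int64_t v = 2147483640LL; v < 2147483656LL; v++) {
        int year = checked_year((time_t)v);
        if (year < last)
            printf("mangled at %lld: year fell from %d to %d\n",
                   (long long)v, last, year);
        last = year;
    }
    return 0;
}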

That means that the program source code (often millions of lines), database design (relatively small, but equally complex), and libraries (which may not even have the original source available!) need to be checked for compatibility.

This applies to ALL operating programs that interact with dates, either directly or indirectly. Every database, every embedded system, every operating system, every library, every eggbeater calibration routine. Imagine looking over all the code you've ever written, looking for very subtle bugs. Now imagine that someone else wrote it twenty years ago, didn't comment it, and has since retired. That is an enormous undertaking.

Probably the more common solution, at least for non-essential software, will be "well, I hope it doesn't break" followed by "well, it broke, let's fix it".

@randint:
You're mostly correct, but Java is only C-like in appearance. Its design philosophy, specification, ecosystem, tooling, and applications are very different.

The technical range of floating-point numbers isn't necessarily representative of the range over which they are most useful. Floating-point values are most densely packed around 0, and become increasingly sparse towards the positive and negative ends of the range. For very large numbers in Java, use BigInteger or BigDecimal, which are arbitrary-precision.
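A small C illustration of that density point, using nextafterf() to step to the adjacent representable float: near 1.0 the gap is about 1.2e-7, but near 2^31 it is 256 whole units, so a 32-bit float could not even count off individual seconds around 2038 (link with -lm on Linux).

C:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Gap between adjacent floats near 1.0: roughly 1.19e-7. */
    printf("gap after 1.0f:  %.10g\n", nextafterf(1.0f, 2.0f) - 1.0f);

    /* Gap near 2^31: 256. Note that 2147483647.0f itself already
       rounds up to 2147483648 before we even start. */
    float near_2038 = 2147483647.0f;
    printf("gap after 2^31:  %.10g\n",
           nextafterf(near_2038, 1e10f) - near_2038);
    return 0;
}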
btiffin
PostPosted: Mon Jul 22, 2013 3:07 pm   Post subject: RE:The 2038 Syndrome: Second Coming of Y2K

Great conversation, people. These are the kinds of details anyone who wishes to speak about the epoch ellipse with any level of authority will need to know.

Otherwise they'll just be another Chicken Little (and don't worry - as the date gets closer, you'll see a LOT of Chicken Littles running around in a panic, not even realizing that they still have their heads).

Cheers
randint
PostPosted: Mon Jul 22, 2013 8:00 pm   Post subject: Re: The 2038 Syndrome: Second Coming of Y2K

@DemonWasp:
I was a little scared when you described C/C++ as loosely standardized languages. Here is the thing: I could have a program written in C/C++ that compiles perfectly well under one compiler, but gives a syntax error under a different compiler, because the two implement the language in "different" ways (and different is bad when a program needs to be multi-platform and work across all platforms without rewriting the code).

Another scary thing you mentioned is code written long ago whose author has retired (or even died) - or worse, where the source code is not available at all (I do not think there is a way to fix a program's bugs if the source code is unavailable).

So, do you think that Y2038 is a lot more complicated than Y2K? (I read somewhere that Y2K was less serious because it involved only the way time is displayed on the screen, as opposed to the way time is represented inside the computer, whereas when the counter reaches 2^31 seconds on January 19, 2038, the system [or any software installed on it] can hit some strange overflow-related error.)

Now I will give my opinion. As a soon-to-be math student at the University of Waterloo, I believe a big part of this is how difficult it is to raise awareness. Y2K was easy to understand: the year was represented using its last 2 digits (e.g. "1999" becomes "99"), so the rollover on January 1, 2000 would look like "00" - seemingly the same as "1900" (also represented as "00"), and therefore the root cause of the date confusion. Someone who failed to comprehend (or has forgotten) high-school-level math (exponential functions, etc.) would have a very hard time understanding the difference between 32-bit and 64-bit; they think 64-bit is twice as powerful as 32-bit, when in fact 2^64 = (2^32)^2 = 18446744073709551616, as opposed to 2^32 = 4294967296. Come to think of it, someone without a reasonably good understanding of such math is unlikely to understand why January 19, 2038 is the doomsday of 32-bit computing. I really hope more people become aware of the serious implications this has for anything that relies on computers (e.g., as posters above have mentioned, power plants and elevators could stop working when the epoch hits).
DemonWasp
PostPosted: Mon Jul 22, 2013 11:30 pm   Post subject: RE:The 2038 Syndrome: Second Coming of Y2K

There are established standards for C and C++ (and several iterations of each standard), but they leave a lot of details unspecified, usually deliberately. You may have heard of "undefined behaviour"? It's used as a catch-all in the C and C++ standards, meaning "this is unsupported, so don't do it - your compiler is allowed to do whatever it wants if you do this!".

It is frequently the case that compiler A and compiler B will disagree about whether any given program is valid, and perhaps more vexingly, about what that program means. Even if you can get A and B to both compile your program, they may compile it to different behaviour. One topical example: int may mean 32 bits to compiler A, but only 16 bits to compiler B. If you're storing numbers above 2^15 - 1, then you're in for a rude surprise when your code doesn't work on compiler B, even though it compiles fine and worked on compiler A.
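The usual defence is to say what you mean with the fixed-width types from stdint.h, and to make the compiler refuse to build when an assumption does not hold. A sketch, assuming C11 for _Static_assert; the typedef name is made up for illustration:

C:

#include <stdint.h>
#include <time.h>

/* Store timestamps in an explicitly 64-bit type instead of 'int' or 'long'. */
typedef int64_t app_timestamp;  /* hypothetical project-local name */

/* Fail at build time, not at the 2038 rollover, if time_t is still 32 bits. */
_Static_assert(sizeof(time_t) >= 8, "time_t is too small: not 2038-safe");

int main(void)
{
    app_timestamp now = (app_timestamp)time(NULL);
    (void)now;  /* placeholder use */
    return 0;
}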

This is, as you mention, a problem for cross-platform development. In real software, a few header files are usually used to smooth over these differences. Developers will usually target specific choices of compiler and platform ("gcc on linux, vc++ on windows", etc) and only really test those combinations.

In the case of programs where the source is no longer available, it might be possible to patch the compiled binaries. However, that's both difficult and complex, and may well require more effort than just rewriting the program.

I would argue that public awareness of the epoch issue isn't really necessary. Of course software developers and engineers should be aware of the issue, as should their managers. The people in charge of maintaining computers and software need to know. However, most people use computers for "the Facebook" or videogames or business applications, and for the most part don't need to know.

Public scaremongering over this issue is pretty well worthless. It'll be expensive for many companies to audit themselves for "epoch compliance", and it'll generate some harrowing overtime for a few software developers, but the vast majority of people don't need to know.
btiffin
PostPosted: Thu Jul 25, 2013 9:24 am   Post subject: RE:The 2038 Syndrome: Second Coming of Y2K

DemonWasp may be one of the rare voices of reason in the wilderness, so, you know:

Don't listen to him. Run around and panic.

Or, learn a little and take advantage of the boom time that is coming. And prepare for the year or two of bust around 2040 as corporations readjust IT funding.

And by advantage, I don't really mean 'take advantage'; I mean be prepared to do some remediation work, with enough know-how to calm nerves and lead teams without wasting too much of the company's money.

It'll be lucrative for a few, an opportunity to sell snake oil for a few others, and for most, just another job.

But why not scare grandma in the meanwhile? She's old anyway, and it serves her right for not learning how to program in POSIX.

Cheers
randint
PostPosted: Sun Jul 28, 2013 10:47 pm   Post subject: Re: The 2038 Syndrome: Second Coming of Y2K

Unfortunately, people like my father actually look at things differently. He said, "Yeah, so what? Even if 6 billion people in this world die due to the lack of electricity, the world did not end; 1 billion still remain." (I told him that there may be no electricity after January 19, 2038 due to this debilitating, fatal, incurable syndrome.) (And yet, our household has 2 laptop computers, 1 iPad, 1 Windows Phone, 1 Blu-ray player, 1 Android phone and 1 "regular" cellphone.)

No, the older generation will not give a **** about losing electricity (and it does seem to me that my parents lived without electricity for much of their childhoods - maybe that is the reason?). I do not know, but they do not seem to be worried...even though I think cars cannot run without electricity (you need a battery to start the car).
mirhagk
PostPosted: Mon Jul 29, 2013 9:31 am   Post subject: RE:The 2038 Syndrome: Second Coming of Y2K

Electricity won't be lost; big systems will all be fixed. I think the actual effect will be that embedded devices in non-critical areas are not fixed. You may notice that your dishwasher stops functioning, for instance, or your printer, or maybe even your TV (although those are getting more advanced, and a software upgrade will probably be very possible by then).

An interesting solution would be allowing every device to receive firmware upgrades. I saw something a little while ago about some company considering doing this for laundry machines, for example. If there were ever a problem, they could simply push out a new firmware update without the user even knowing what was happening. It could save them from having to do a recall, and could even fix issues too minor for a recall but still major enough that users might be upset (and they'd lose future sales).

We can force this to happen by writing some really awesome code for laundry machines, then releasing it under GPLv3 (which forces them to allow the user to modify the code - open-source fridges FTW!)