[ale] gcc optimization problem

D. Alan Stewart astewart at layton-graphics.com
Mon Mar 4 17:56:33 EST 2002


For some reason, when I compile with the -O2 option this function always 
returns 0.0, unless I insert some printf's, in which case it behaves normally. 
Does anyone have enough experience with gcc optimizations to guess why? 
(The function bit-twiddles a VAX D floating point number into Intel IEEE 
double precision format.)

Float64 readVaxFloat(Float64 *input)
{
  Float64 output;

  /* Reinterpret both doubles as arrays of four 16-bit words. */
  Uint16 *in = (Uint16*) input;
  Uint16 *out = (Uint16*) &output;

  out[3] = in[0] & 0x8000;                        /* sign bit */
  out[3] |= (((in[0] & 0x7F80) >> 7) + 894) << 4; /* rebias 8-bit VAX exponent into 11-bit IEEE field */
  out[3] |= (in[0] & 0x007F) >> 3;                /* top 4 mantissa bits */
  out[2] = (in[0] << 13) | (in[1] >> 3);          /* remaining mantissa, shifted right by 3 */
  out[1] = (in[1] << 13) | (in[2] >> 3);
  out[0] = (in[2] << 13) | (in[3] >> 3);

  return output;
}
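For comparison, here is a sketch of the same bit-twiddling written so that the 
compiler cannot miscompile it under -O2's strict-aliasing assumptions: the words 
are copied in and out with memcpy instead of accessing a Float64 object through 
Uint16 pointers. Float64 and Uint16 are assumed to be double and uint16_t, and, 
like the original, this assumes a little-endian host with IEEE doubles:

```c
#include <stdint.h>
#include <string.h>

typedef double   Float64;  /* assumed: 64-bit IEEE double */
typedef uint16_t Uint16;

Float64 readVaxFloatSafe(const Float64 *input)
{
    Uint16 in[4], out[4];
    Float64 result;

    /* Copy the bytes into a real Uint16 array instead of casting
       the pointer, so no object is accessed through an incompatible
       pointer type. */
    memcpy(in, input, sizeof in);

    out[3] = in[0] & 0x8000;                                  /* sign bit */
    out[3] |= (Uint16)((((in[0] & 0x7F80) >> 7) + 894) << 4); /* rebias exponent */
    out[3] |= (in[0] & 0x007F) >> 3;                          /* top 4 mantissa bits */
    out[2] = (Uint16)((in[0] << 13) | (in[1] >> 3));          /* remaining mantissa */
    out[1] = (Uint16)((in[1] << 13) | (in[2] >> 3));
    out[0] = (Uint16)((in[2] << 13) | (in[3] >> 3));

    memcpy(&result, out, sizeof result);
    return result;
}
```

With -O2, gcc assumes a Uint16 lvalue never aliases a Float64 object, so in the 
original it is free to reorder or discard the stores to out[] relative to the 
read of output; the printf's act as optimization barriers, which is why they 
"fix" it. memcpy (or a union, in C99) tells the compiler the bytes really move.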


D. Alan Stewart
Layton Graphics, Inc.
155 Woolco Dr.
Marietta, GA 30062
Voice: 770/973-4312
Fax: 800/367-8192
FTP: ftp.layton-graphics.com
WWW: www.layton-graphics.com


"As far as the laws of mathematics refer to reality, they
are not certain; and as far as they are certain, they do
not refer to reality." - Albert Einstein

---
This message has been sent through the ALE general discussion list.
See http://www.ale.org/mailing-lists.shtml for more info. Problems should be 
sent to listmaster at ale dot org.
