Assume we're working on a system where CHAR_BIT == 8.
Let's say we have a raw byte buffer in memory:
unsigned char data[112];
Within this buffer is data that you got from your network card, an
ethernet frame to be exact. An ethernet frame is laid out as follows:
First 6 octets: Destination MAC address
Second 6 octets: Source MAC address
Next two octets: Protocol
In order to analyse the ethernet frame, I was thinking that maybe we
could make an exact-size struct as follows:
#include <stdint.h>

struct FrameHeader {
    uint8_t dest[6], src[6];
    uint16_t proto;
};
(I realise that we'd need a compiler-specific way of specifying no padding
between members, and that we'd have to be careful about alignment.)
And then do the following:
if ( 0x800 == ((struct FrameHeader const*)data)->proto )
puts("Contains an IP packet");
So far, I believe we have two issues:
1) The alignment of "proto"
2) The byte order of "proto"
Firstly, to get around the byte order issue, I was thinking of
changing the structure to:
struct FrameHeader {
    uint8_t dest[6], src[6];
    uint8_t proto[2];
};
And then making a function-like macro to turn a "uint8_t[2]" into a
"uint16_t", treating the octets as big-endian:
#define OCTETS_TO_16(p) ( (uint16_t)(p)[0] << 8 | (p)[1] )
so that we could do:
if ( 0x800 == OCTETS_TO_16( ((struct FrameHeader const*)data)->proto ) )
    puts("Contains an IP packet");
Does this sound good?
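For comparison, a minimal sketch of the same check done without the struct at
all, just reading the two protocol octets straight out of the buffer (offset
12 follows from the frame layout above; check_proto is only an illustrative
name):

#include <stdint.h>
#include <stdio.h>

#define OCTETS_TO_16(p) ( (uint16_t)(p)[0] << 8 | (p)[1] )

static void check_proto(const unsigned char *data)  /* data = the raw buffer */
{
    /* dest = octets 0..5, src = 6..11, proto = 12..13 */
    if ( 0x800 == OCTETS_TO_16(data + 12) )
        puts("Contains an IP packet");
}

That avoids both the padding and the alignment questions, since nothing wider
than a byte is ever read from the buffer.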
The program that's being written is a network protocol analyser. I
myself am not writing it, but I've been asked to give a little advice.
The program is being written for MS Windows, but since the person's
using a cross-platform library for networking, I think they might try to
get it to compile for Linux and Mac as well.
On these three OSes, are there any alignment requirements for integer
types, or will the program crash if we try to access a misaligned
integer?
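One way to make the misalignment question moot is to copy the two octets into
a properly aligned uint16_t first; a sketch, assuming the usual ntohs() from
the sockets headers (<winsock2.h> on Windows, <arpa/inet.h> on Linux/Mac), and
with get_proto as an illustrative name:

#include <stdint.h>
#include <string.h>
#ifdef _WIN32
#include <winsock2.h>    /* ntohs() */
#else
#include <arpa/inet.h>   /* ntohs() */
#endif

static uint16_t get_proto(const unsigned char *frame)
{
    uint16_t raw;
    memcpy(&raw, frame + 12, sizeof raw);  /* no misaligned 16-bit load */
    return ntohs(raw);                     /* big-endian wire order -> host order */
}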
Also, is endianness determined by the CPU, or is it determined by the OS?
Does anyone know what the endiannesses are for the common CPUs and
OSes?
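In case it helps while those questions are open, a tiny sketch that checks the
host's byte order at run time:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t probe = 0x0102;
    unsigned char first = *(unsigned char *)&probe;  /* lowest-addressed byte */
    puts(first == 0x01 ? "big-endian host" : "little-endian host");
    return 0;
}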
Any tips appreciated.