Tomás Ó hÉilidhe
I'm doing low-level networking programming at the moment, writing my
own Ethernet frames, so I start off with the destination MAC address,
then the source MAC address, then the Protocol ID, then the IP packet,
then the UDP segment, and so forth.
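To make that concrete, here's roughly how the start of one of my frame
buffers gets filled in (the helper name is made up purely for
illustration; the field sizes are the standard Ethernet ones: 6-byte
destination MAC, 6-byte source MAC, 2-byte Protocol ID, with the IP
packet starting at offset 14):

    #include <stdint.h>
    #include <string.h>

    void BuildFrameHeader(uint8_t *frame,
                          uint8_t const *dst_mac,  /* 6 bytes */
                          uint8_t const *src_mac,  /* 6 bytes */
                          uint16_t protocol_id)    /* e.g. 0x0800 for IP */
    {
        memcpy(frame,     dst_mac, 6);
        memcpy(frame + 6, src_mac, 6);

        /* The Protocol ID goes out in network byte order (big-endian) */
        frame[12] = (uint8_t)(protocol_id >> 8);
        frame[13] = (uint8_t)(protocol_id & 0xFF);

        /* The IP packet follows at frame + 14, with the UDP segment
           inside it. */
    }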
The networking library I'm using is Berkeley Sockets; I've decided to
go with it because I hear it's the most widely supported networking
library across all platforms.
Anyway, although I want my program to be as portable as possible, I
realise that it will only be portable to systems which have an
implementation of Berkeley Sockets, and which also have an exact 8-bit
type, a 16-bit type and a 32-bit type (all without padding). I get
these types from stdint.h:
#include <stdint.h>
int VerifyPacketChecksum(uint8_t const *packet);
Throughout my code, though, there are a few places where I extract
16-bit numbers from an Ethernet frame. I know that one possible method
of doing this would be:
(p[0] << 8) | p[1]
But at the moment I have the following in my code:
ntohs( *(uint16_t const*)p )
("ntohs" is a function which converts from network byte order to host
byte order)
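To put the two alternatives side by side, here they are written out as
little helper functions (the names are made up purely for
illustration):

    #include <stdint.h>
    #include <arpa/inet.h>   /* ntohs; on Windows it lives in winsock2.h */

    /* Byte-by-byte version: reads the two octets individually, so
       neither alignment nor host byte order ever comes into it. */
    uint16_t Read16_Shift(uint8_t const *p)
    {
        return (uint16_t)((p[0] << 8) | p[1]);
    }

    /* Cast version: relies on being allowed to read a uint16_t from an
       arbitrarily aligned address, which is exactly what I'm unsure
       about. */
    uint16_t Read16_Cast(uint8_t const *p)
    {
        return ntohs( *(uint16_t const *)p );
    }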
It's possible that "p" will not be aligned on a two-byte boundary, and
I'm wondering whether that will be a problem. I realise that the C
Standard says outright that the behaviour is undefined if alignment
requirements are not met... but seeing as I've already made
assumptions about there being an 8-bit, 16-bit and 32-bit type, would
it not also be fair to assume that I can access a uint16_t regardless
of how it's aligned?
I suppose in essence what I'm asking is this: on the systems where
Berkeley Sockets is implemented, and where there are exact 8-bit,
16-bit and 32-bit types, is it OK to read or write a uint16_t in
memory regardless of the alignment? The main platforms I have in mind
are Windows, Linux, Mac, Unix, Solaris, and possibly the Xbox 360 and
PlayStation 3.
Or should I just go with (p[0] << 8) | p[1] to be safe?