When I started programming, I was told that a C++ integer (int) is always as wide as the processor's word: two bytes on 16-bit processors, four bytes on 32-bit processors. The reason was optimization. This was a very simplified view, since C++ only guarantees that sizeof(short) <= sizeof(int) <= sizeof(long). However, it was true enough. Today there is not one but several 64-bit data models in use, and as a developer you have no say in which model is used: the choice is made once you decide on which operating system your application will run. If you target Windows, you will use the LLP64 data model (16-bit short, 32-bit int, 32-bit long, 64-bit pointer). With Linux as the platform of your choice, LP64 is used (16-bit short, 32-bit int, 64-bit long, 64-bit pointer). The difference is small, but it can make life difficult if you develop for multiple platforms.
Fortunately, in most modern managed environments (e.g. .NET, Java) data type sizes are fixed. I find this much better.
That’s why you normally typedef your own data types, for example int16, int32, int64 and so on, and use preprocessor conditionals to define these typedefs according to the target platform. For example, with Visual Studio you could typedef them to __int16, __int64 and so on.
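Such a set of typedefs could look like the sketch below. The macro check and the fallback types are assumptions that would need to be adjusted for your actual compilers and targets:

```cpp
// Fixed-width integer typedefs guarded by a compiler check (illustrative only).
#if defined(_MSC_VER)
    // Visual Studio's built-in sized integer types.
    typedef __int16 int16;
    typedef __int32 int32;
    typedef __int64 int64;
#else
    typedef short     int16;  // 16 bits on both LP64 and LLP64
    typedef int       int32;  // 32 bits on both models
    typedef long long int64;  // 64 bits wherever long long is supported
#endif
```

Putting these in a single shared header keeps all the platform-specific conditions in one place instead of scattered through the code.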
When your compiler is C99 compliant (unfortunately, the MS compiler isn’t yet :(), you can use intN_t from <stdint.h>, where N is the width you want.