I read through several online articles to understand this a little better, and from what I've gathered, this question has already been answered:
The size of the byte has historically been hardware dependent and no definitive standards exist that mandate the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. Many types of applications use variables representable in eight or fewer bits, and processor designers optimize for this common usage.
Though this is from Wikipedia, I think it will suffice as reasoning, unless some other techno wizard gets on here and can explain it better.