Harvard vs Von Neumann
The first choice you make when designing a computer is whether to keep the program and its working memory in the same place.
This has profound implications for what the program can do and how the computer executes instructions.
Harvard architecture: keep the program in a physically different location from its working memory.
Harvard is more complicated electrically (at least for me): we need to fetch the program from one place but allow it to modify memory in another place. This has an enormous benefit, because we can fetch instructions without putting any load on the data bus, so instruction fetches and data accesses never compete. However, for our 4-bit single instruction computer, it would complicate the project for no reason.
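To make the split concrete, here is a minimal sketch of a Harvard-style fetch-execute loop. The instruction set and encoding are invented for illustration, not the design we will build: the point is only that instructions come from one array (`program`) and data lives in another (`ram`), so a STORE can never touch the code.

```python
def run_harvard(program, ram, max_steps=100):
    """Toy Harvard machine: program and data are physically separate arrays."""
    pc, acc = 0, 0
    for _ in range(max_steps):
        opcode, operand = program[pc]   # instruction fetch never touches ram
        pc += 1
        if opcode == "HALT":
            return acc
        elif opcode == "LOAD":          # acc = ram[operand]
            acc = ram[operand]
        elif opcode == "STORE":         # writes can only go to the data ram,
            ram[operand] = acc          # so the program cannot modify itself
        elif opcode == "ADD":           # acc += ram[operand]
            acc += ram[operand]
    raise RuntimeError("step limit exceeded")

prog = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)]
data = [5, 37, 0]
print(run_harvard(prog, data))  # 42 -- and data[2] is now 42, prog is untouched
```

Notice that nothing in the loop can ever write into `program`; that read-only separation is exactly what makes Harvard simple to reason about and awkward to wire up.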
Von Neumann architecture: the program and working memory are in the same place.
It is incredibly elegant: it makes self-modifying programs trivial. There are also deeper reasons why we should not separate the program from its data. We just have to somehow put our program in the working RAM and start executing from address 0. The programmer, however, must be careful that the program does not corrupt itself.
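Here is the same kind of toy sketch for the Von Neumann side, again with an invented encoding (opcode times 100 plus operand), not our actual design. One array holds everything, execution starts at address 0, and a STORE aimed at an address inside the program rewrites an instruction before it runs: self-modification for free, and self-corruption just as free.

```python
def run(memory, max_steps=100):
    """Toy Von Neumann machine: one shared array holds code AND data."""
    pc, acc = 0, 0
    for _ in range(max_steps):
        word = memory[pc]                     # fetch from the same memory
        opcode, operand = divmod(word, 100)   # decode: opcode*100 + operand
        pc += 1
        if opcode == 0:                       # HALT
            return acc
        elif opcode == 1:                     # LOAD addr: acc = memory[addr]
            acc = memory[operand]
        elif opcode == 2:                     # STORE addr: memory[addr] = acc
            memory[operand] = acc             # may overwrite an instruction!
        elif opcode == 3:                     # ADD addr: acc += memory[addr]
            acc += memory[operand]
    raise RuntimeError("step limit exceeded")

mem = [
    105,  # 0: LOAD 5   -- acc = 106, which happens to encode "LOAD 6"
    202,  # 1: STORE 2  -- overwrite the instruction at address 2
    0,    # 2: was HALT, becomes LOAD 6 before the CPU ever fetches it
    0,    # 3: HALT
    0,    # 4: (unused)
    106,  # 5: data that encodes the instruction "LOAD 6"
    42,   # 6: data
]
print(run(mem))  # 42 -- the program rewrote itself, then loaded from address 6
```

If the STORE at address 1 had targeted address 0 or 1 instead, the program would have clobbered itself mid-run; that is the corruption hazard the programmer has to manage by hand.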
Like anything in engineering, there are tradeoffs: you have to understand what you are giving up and what you are gaining. Why would you choose one over the other? At this point you cannot make this choice, because you don't know enough. And that is OK. I will pick for you. We will make our computer Von Neumann. It's not a big deal if you make the wrong choice; you will learn either way, as long as you don't give up. You just have to create things.
In the name of speed, size, power efficiency and security, modern computers are so ridiculously complicated that we can no longer cleanly separate them into classes like Harvard or Von Neumann; they have various components that are some mutations of each, or neither.