
Computer Science 101: The Basics

(Dec 10th, 2013 at 02:23:37 AM)

Every day this week I am going to take some time to explain a little bit about computer science. After hearing about Computer Science Education Week, I really wanted to share some of what I know with people who don't know much about programming but are interested in learning a little more about how it all works. I plan to write five short blog posts, each detailing another piece of the puzzle. At the end, don't expect to be able to run off and start programming. My goal is just to give you a little better idea of what programmers do and how computers work.


For people who aren't very fluent in computer science, there are two terms that I think people tend to at least know, even if they don't understand them entirely: "algorithm" and "binary". This is probably because these two concepts are at the core of most of computer science, so I want to start this week out by giving a brief description of what each is and why it is important.

Algorithms

An algorithm is basically just a set of step-by-step instructions for performing a given task. The algorithms involved in computer science are typically fairly abstract, and this makes sense: knowing how to sort numbers doesn't necessarily mean you know how to sort words or species of animals. Instead, sorting algorithms are written to be very general so that programmers can use them on whatever their program requires.
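To make that concrete, here is a minimal sketch of an algorithm written in Python (the language I'll use for these little examples; any language would do). It finds the largest item in a list, and because the steps only rely on comparing items, the exact same instructions work on numbers and on words alike:

    def find_largest(items):
        # Step 1: assume the first item is the largest seen so far.
        largest = items[0]
        # Step 2: compare every remaining item against the current largest.
        for item in items[1:]:
            if item > largest:
                largest = item
        # Step 3: once everything has been checked, report the winner.
        return largest

    print(find_largest([3, 7, 2]))                # 7
    print(find_largest(["cat", "zebra", "ant"]))  # zebra (alphabetical order)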

There are loads of algorithms available to programmers these days for calculating mathematical results or securely encrypting data (which basically just means scrambling information in a clever way) such as credit card numbers or passwords. And because of how general many of these algorithms are, programmers are often able to re-use algorithms that have already been written by others, even without knowing how they work!
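As a quick illustration, Python comes with a built-in sorting algorithm, and its standard library includes hashing algorithms (a close relative of encryption, commonly used for storing passwords). We can use both without knowing anything about their internals:

    import hashlib

    # A ready-made sorting algorithm: no need to know how it works inside.
    print(sorted([42, 7, 19]))                    # [7, 19, 42]
    print(sorted(["banana", "apple", "cherry"]))  # ['apple', 'banana', 'cherry']

    # A standard scrambling (hash) function; the output reveals
    # essentially nothing about the input that produced it.
    print(hashlib.sha256(b"my secret password").hexdigest())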

Binary

Binary is a term many people throw around without really knowing what it means or why it's important, other than that it's used in computers. As an idea, binary is base 2. Decimal, base 10, is (per Wikipedia) "the numerical base most widely used by modern civilizations." To get a feel for the difference, counting in decimal goes like this:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, and so on.

In binary, the digits 2 through 9 do not exist, leaving only 0 and 1. In the same way, you count up until you have no digits left, then shift over a place (in this case a "binary place" rather than a decimal one):

0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1100, and so on.
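If you want to check this sequence yourself, Python's built-in bin() function converts an ordinary number into its binary form:

    # Print 0 through 12 alongside their binary forms.
    for n in range(13):
        print(n, bin(n)[2:])  # bin() returns strings like "0b101"; strip the prefix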

Note that these are still the values 0 through 12. Instead of a 10's place and a 100's place, binary has a 2's place, a 4's place, an 8's place, and so on. So just as 111 in decimal equals 1*hundred + 1*ten + 1*one, 111 in binary equals 1*four + 1*two + 1*one, which is 7 in decimal. In computers, binary is used to represent all data. For example, the eight bits (binary digits) 01100001 could represent the lower-case letter "a", or the number 97, or many other things, depending on how a program chooses to interpret them.
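Here is a quick sketch of both points in Python. int() can read a string of digits in any base, and chr() maps a number to a character (per the ASCII table, where 97 happens to be "a"):

    # Place values: 111 in binary is 1*4 + 1*2 + 1*1 = 7.
    print(int("111", 2))   # 7
    print(int("1100", 2))  # 12

    # The same eight bits mean different things depending on interpretation.
    bits = 0b01100001
    print(bits)       # 97  (read as a number)
    print(chr(bits))  # a   (read as a character via the ASCII table)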

Computer Science

As a whole, computer science is a great balance of practical art and hard science. There are many aspects of it that are very complicated and math-heavy. However, with a basic knowledge of what is involved, it can be easy to get started and make something simple; from there, the rest is just problem solving. But whether you want to program or not, I think it is very important to understand how computers work and the unbelievable amount of effort that has gone into making all of this possible. So be prepared for things to ramp up (though I'll try to keep it reasonably simple) as I explain, throughout the week, the parts of what I do that I think about nearly every day.

Also, in the coming days I will be bringing up both algorithms and binary again, so make sure you understand them. And don't forget that Wikipedia and Google are invaluable resources for learning just about everything. :)
