Programming frustration

Chipping in a bit late here, but the questions you're asking make it seem like you've tried to do a bit too much too quickly (and probably copied and pasted a bit much along the way), and you've ended up with half a grasp of a small subset of an incredibly complex language.

Games are always good fun to code, and everyone seems to want to have a stab at making their own - but in all honesty I'd recommend stepping back from this for a while and getting a good book (much like celegorm has already suggested).

It seems that coding is nothing more than just memorizing.
I think you have this impression from copying and pasting a bit too much without understanding what's going on - and again, this is where you'll get a lot of benefit from going through a good book from the beginning.

Coding is far from memorising - sure, you'll have to memorize some bits, but they're comparatively few really. The real challenge of coding is putting those bits together in a way that builds a system. And that all comes with practice - just start a bit smaller than a fully fledged game (a calculator, perhaps?) and forbid yourself from ever blindly copying something without understanding at least vaguely what it's doing :)
 
Tony Gaddis' Starting Out with Programming Logic and Design, 3rd edition, is a great book to read and understand. We used that book in my programming class in college. It teaches the basics without teaching a particular language.
 
ok...

I'm not sure I agree with the usual approach to teaching this, because I suspect that in answering what you want to know I'm about to confuse you more.
This is sadly why C is taught in a variety of ways which are mostly the same: introduce functions, get you comfortable with syntax, then explain the boring bits.

That, and keeping interest tends to mean that you're taught how to do things before you're taught why you do them. By the time you get to the why, you've already been through the boring bits, carried along by the excitement of writing something useful - i.e. you'll turn more students off by leaping into advanced theory, whys and wherefores before actually producing a bit of code.

# introduces a compiler directive (strictly speaking, a preprocessor directive).

#define suggests that you are going to define something.
#include means you want to include a whole other file of source code.
I have in my time used #byte to tie a variable to a hardware register. For example, if you use the CCS C compiler and the 16F628 series of chips, port A (PORTA on the datasheet) is controlled by register 5. To tie a variable to that register the precompiler uses #byte, e.g. #byte PORTA = 5
(note: no trailing semicolon)
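
To make the first two concrete, here's a minimal sketch (MAX_LIVES is just a made-up name for illustration):
Code:
#include <iostream>  // pull in a whole other file of source: the standard I/O header

#define MAX_LIVES 3  // the preprocessor swaps every MAX_LIVES below for 3 before compiling

int main()
{
    std::cout << "You have " << MAX_LIVES << " lives\n";
    return 0;
}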

Compiler directives may change depending on what compiler you are using. For example, the Borland compiler has no #byte directive, and neither does GCC on Windows or Linux (though this might change if you're targeting embedded processors).


a char is simply a character: on most platforms a signed 8-bit number, -128 to 127.
an unsigned char is a char without a sign bit, i.e. 0 to 255.
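
If you want to check those ranges on your own machine, here's a quick sketch using the standard <limits> header:
Code:
#include <iostream>
#include <limits>

int main()
{
    // the casts to int stop the values printing as characters
    std::cout << "signed char:   "
              << (int)std::numeric_limits<signed char>::min() << " to "
              << (int)std::numeric_limits<signed char>::max() << '\n';
    std::cout << "unsigned char: "
              << (int)std::numeric_limits<unsigned char>::min() << " to "
              << (int)std::numeric_limits<unsigned char>::max() << '\n';
    return 0;
}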


In relation to cout and cin: these are, as you say, a way to get characters out of and into your program.



As for when you may use a loop or some other logic statement...
In everyday life you kind of subconsciously use loops all the time. (What's my temperature? Is it comfortable? Put on or take off a jumper.)

So you have a start point and a question or condition; you check whether the condition is satisfied, and depending on the outcome you do it all over again.
Code:
start,
is paint dry?
if no then goto start
if yes then carry on...
so it is important to learn how to use loops.

If you want to write the words hello world 100 times you can either write
Code:
cout << "hello world";
cout << "hello world";
...
...
and so on

or:
Code:
int x = 0;
while (x < 100)
{
    x = x + 1; // make x bigger each time through the loop, else you'll be stuck in it forever
    cout << "hello world";
}
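
For what it's worth, once you're comfortable with that, the same thing is more usually written with a for loop, which puts the start point, condition and increment all on one line. A complete version would look something like:
Code:
#include <iostream>
using namespace std;

int main()
{
    for (int x = 0; x < 100; x++) // start at 0, loop while x < 100, add 1 each time round
    {
        cout << "hello world\n";
    }
    return 0;
}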




as before, #include is a way to include other files.

#include <math.h> includes the math-related declarations and functions.
#include "my_other_code_functions.h" would include declarations for functions that you write yourself (so if you have a library of stuff built up over a long time you can just reference it without reinventing the wheel each time). Note that you include your own header (.h) files rather than the .cpp files themselves.
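
A typical header of your own might look something like this (the file name and function are made-up examples) - it holds declarations, with the actual function bodies living in a matching .cpp file:
Code:
// my_other_code_functions.h
#ifndef MY_OTHER_CODE_FUNCTIONS_H  // an 'include guard' so the file is only pulled in once
#define MY_OTHER_CODE_FUNCTIONS_H

int add(int a, int b);  // declaration only; the body goes in my_other_code_functions.cpp

#endif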

int is short for integer. Every function is expected to be able to return something, and int main defines your main function. Main functions return status codes: 0 for success, then anything else for the various failures.

so for example I could have a function,
Code:
int add(int a, int b)
{
    int c;
    c = a + b;
    return c;
}
This is a function that will be passed two integer values and return one integer value.
You can see that inside the function a third integer (c) is defined - this is the one that gets passed back.
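
And to actually use it, you'd call it from main - a minimal sketch:
Code:
#include <iostream>

int add(int a, int b)
{
    int c;
    c = a + b;
    return c;
}

int main()
{
    std::cout << add(2, 3) << '\n'; // passes 2 and 3 in, prints the returned 5
    return 0;
}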


iostream is the header that controls the... well, I/O stream.
You need it before you can use cout and cin, as these are defined inside that header.
<iostream> - C++ Reference

using namespace std goes hand in hand with that iostream thing.
It is quite literally saying: use the standard namespace, called std.

You can create namespaces with other names of your own.

e.g. namespace bob { ... } creates one, and using namespace bob; pulls its names in - a namespace is the space that holds the named variables...

Because variables are named inside namespaces, you can re-use the same variable name in different ones.

for example
Code:
#include <iostream>
using namespace std;

int x = 1; // x in the global namespace

namespace bob
{
    int x = 2;
}

namespace john
{
    int x = 3;
}

int main()
{
    cout << x << '\n';       // a plain x is the global one, so this outputs 1
    cout << bob::x << '\n';  // explicitly request the x from namespace bob: outputs 2
    cout << john::x << '\n'; // explicitly request the x from namespace john: outputs 3
    cout << ::x << '\n';     // explicitly the global x again: outputs 1
    return 0;
}

(Note: if you write using namespace bob; inside main and then use a plain x, the compiler will complain that x is ambiguous - it can't tell whether you mean the global x or bob::x - so the :: qualifier is the reliable way to pick one.)

Quite why you'd want to assign the same variable name in different namespaces, and most likely ultimately confuse yourself, is something I'm yet to fathom out.

(I mean, why not use int bob_x, john_x, etc.?)


Is that any more or less clear?
 
Root,

This is brilliant information. I will copy and paste what you wrote and add it to my programming library. That is a wealth of knowledge and it makes much more sense to me now. The # compiler directive always tripped me up and I wondered why I needed it. Now I know.
#include is to include other files. The word "include" made sense because I used it in C++ to add other files to my programs. The # is what I couldn't understand.

The variables make sense now. I used char quite a bit and looked it up on google to try to understand how to use it.

You made it much clearer. This is what I need to focus on: why I am putting the variables and loops into the syntax, rather than just memorizing them and not knowing much about them.

Cheers!
 
To be honest, I think that it is best to learn by doing.
i.e. you make a program to do something, and in making a program that does something you have a start, middle and end.

and when you're learning it really is best to keep it simple!

When I say start, middle and end, I mean:

If I say write a program to open up a console, display the words hello world, and then close.

can you do that?

next exercise:
create a program that asks you for a number and lets you type in a whole number,
then asks you for a second number and lets you type in a whole number
Then it should add those numbers together and display the answer.
number 1 + number 2 = answer
then exits.


The trouble is that this is a lot more boring than you'd imagine.
It's lab exercises like these that most people try to avoid - which in a way is what you did when you first started. If you copy and paste pre-existing code then it's difficult to say that you know how to program; it's more accurate to say that you know how to work a compiler.

That hello world program is usually the first thing that you're taught in any language.
The whole point of a program is to be able to do something - usually take input and give results - thus the ability to communicate in some way is about the most important thing a program can do. If a program just took in data and never gave out data, it wouldn't really be a useful program, would it?


The second program is important because it teaches you how to get data into the program.

The mission - take and add up two whole numbers - suggests that you should define your data types as integers.
Then you scan numbers into the program, and you expect whole numbers.

But you should check what happens if I type Alice when asked for the first whole number, or bob when asked for the second. What's the result? And why? etc...

Then you get into thinking about what variables you're using and their data types, and about how, if a user has a whole keyboard, you can't trust them to only press the number keys when instructed, etc...
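
One possible shape for that second exercise, with a basic check that the input really was a whole number (just a sketch - it's not the only way to do it):
Code:
#include <iostream>
using namespace std;

int main()
{
    int a, b;

    cout << "Enter the first whole number: ";
    if (!(cin >> a)) // extraction fails if the user types e.g. Alice
    {
        cout << "That wasn't a whole number!\n";
        return 1;    // non-zero status code: something went wrong
    }

    cout << "Enter the second whole number: ";
    if (!(cin >> b))
    {
        cout << "That wasn't a whole number!\n";
        return 1;
    }

    cout << a << " + " << b << " = " << a + b << '\n';
    return 0;        // success
}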

Then when you're comfortable with "basic" programs move onto more advanced ones...

Sadly, most people will never stop to wonder what they are doing and why they are doing it, like you have, so most will never understand the data types they are using, or how or why they are doing certain things.
 
Jcrew, firstly, based on how you're phrasing your questions you certainly have noble aspirations and a keen interest in understanding exactly what each line of code means. I have to say this is a personal quality which the vast majority of students on my CompSci degree completely lacked. I commend you highly for this.

Secondly, I can't possibly begin to provide detailed answers to all the (very good) questions which have been raised thus far in this thread; however, I hope that I can provide some additional direction to complement what others (celegorm and root particularly) have already offered.

I'm just dealing with C/C++ here, since that is where most of the interest appears to be. And whilst learning Java first is probably easier, as it abstracts a lot of technical concepts away from the programmer, that actually makes what you're trying to learn (how things really do what they do, and why the code is written the way it is) much harder - hence I strongly recommend you stick with C to begin with (not even C++, as it is a much larger superset of C and you needn't worry about any of that until you're capable with C).

To that end, I cannot stress enough the importance of one single book (http://books.cat-v.org/computer-sci...ge/The.C.Programming.Language.2nd.Edition.pdf for an online copy or http://www.amazon.co.uk/The-Programming-Language-2nd-Edition/dp/0131103628 to purchase - MAKE SURE YOU GET THE SECOND EDITION (as per the links)!)

The above is literally the book on the language, and explains practically every construct in minute detail. It is referred to as the C bible and is well known as 'K&R' after the authors who invented the language.

Disclaimer: I cannot guarantee that this is absolutely 100% correct on all technical details, having worked with C professionally over the last 7 years I have learnt that every week you 'learn' something which you thought you already knew!

So, some detail. Others covered the fact that '#' is a preprocessor directive. The preprocessor is one stage of the 'build process'.

Build process

Essentially, to 'build' (make an artefact, e.g. an executable or library) your code, several passes are done. These transition from source to the final artefact, i.e. .c -> .o -> .exe or .so or .a
Note: Whilst on Windows you'd get .exe and .dll, on linux and other platforms you'd get a blank extension (i.e. myprog vs. myprog.exe) and .so or .a for a shared library or static library, respectively.

1) First the preprocessor is invoked; this interprets all #define statements and replaces the named symbol, wherever it occurs in your source code, with the actual definition (apologies for not giving examples of all the things I'm talking about here - it's all in the book above though!).
As well as #define statements, the pre-processor also resolves all #include directives to ensure that any header files are present on the system (you should never include a .c/.cpp file unless you really know what you're doing and why you have to). This is done because there is no point even starting to compile your own source code if the code upon which it depends can't even be found on your system.
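
As a tiny illustration of what the preprocessor does (BUFFER_SIZE is just a made-up name) - you can inspect the expanded output yourself with g++ -E yourfile.cpp (or gcc -E for plain C):
Code:
#define BUFFER_SIZE 128

char buf[BUFFER_SIZE]; // after preprocessing, this line is literally: char buf[128];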

For more information on this side of things, read about the 'include path', which needs to be set for your compiler to be able to find all the include files you want. It is very bad practice to have absolute paths - or even long relative ones - in your source code, as it makes portability and readability difficult.

The pre-processor probably does other things as well, but these are the most important two.

2) After the preprocessor has finished (assuming no errors), the compiler is invoked. This is where all your source code is checked for syntax errors: that could be a missing ';' between statements (a statement is a single specific instruction - I have used several keywords deliberately in this post as they are correct terminology; I can't explain them all, but you'll recognise them when you see them elsewhere), illegal use of reserved keywords (e.g. a variable called 'int'), etc.
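
For instance, both of the marked lines below would be rejected at this stage:
Code:
int main()
{
    int x = 1    // missing ';' - a syntax error the compiler will report
    int int = 2; // 'int' is a reserved keyword, so it's illegal as a variable name
    return 0;
}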

I italicised the 'your' in that section because the header files which you are 'including' via the #include directive should have compiled objects already associated with them. However, if this is not the case, then these too will be compiled.

3) After compilation (again assuming no errors), you get to the linking/linker phase. This is the final stage of the build process. At this point you'll have a series of binary object files (typically 'mysourcefile.o'), one for each of your source files; however, you have no executable or library artefact which you can actually run or share with another application, respectively.

The linker is the stage which combines all these independent objects into the desired output - an executable program, in the simplest case. Many programmers (even those who've completed a degree and done 1-2 years of professional experience) will fall foul of the infamous 'undefined reference' error message, which is commonly mis-described as a compiler error. In fact it is a linker error, and it basically means that an object the linker is currently processing makes a reference (a call/invocation) to a function which exists in one of the other objects, but the linker has not been told where that other object lives, and hence can't see the definition which the reference relates to.
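
To see these stages yourself, here's a minimal two-file sketch (the file names are just examples). Compile each file to an object with g++ -c add.cpp main.cpp, then link with g++ main.o add.o -o myprog; leave add.o off that last line and you get exactly the 'undefined reference' error described above:
Code:
// add.cpp
int add(int a, int b)
{
    return a + b;
}

// main.cpp
#include <iostream>

int add(int a, int b); // declaration only - the definition lives in add.o

int main()
{
    std::cout << add(2, 3) << '\n';
    return 0;
}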

Data Structures and Algorithms

Once you've understood these principles you can move on to more abstract constructs like data structures and algorithms - these are the foundation of every program, and a solid understanding of how and when to use which data structure (and associated algorithm) is the first and most important criterion when hiring any programmer. This is a language-agnostic skill which clearly demonstrates logic and creative problem-solving, and provokes thought about what design decisions and user requirements will influence the functionality and performance of any program.

I won't go on any more, there should be more than enough there for anyone to digest in one go. I will leave you with one final thought though...

Debugging is your friend, you'll learn 10x more fixing a program that doesn't work vs. writing it 'correctly' (i.e. one which does what you want it to, not necessarily in the best way) first time around.

Hope that helps,
Michael.
 
 