CPU scheduling in the latest multicore processors

Faslan

How does CPU scheduling work with the latest CPUs (i7, etc.)?
I'm asking because CPU scheduling mechanisms (FCFS, SJF or Round Robin) were designed to get around the problem that "the CPU can only execute instructions one by one, one after another".

But a modern CPU has 2, 4 or 8 cores, so it can execute more than one instruction at the same time.

thank you
 
It's not so much cores you need to think about as hardware threads.
(So even a single-core, hyper-threaded PC can have more than one instruction stream in flight.)

(Separate cores don't share resources, but with hyper-threading the two threads on a core do share its pre-processing resources.)

The individual core (or individual thread, for cores that have hyper-threading) can still only run one instruction stream at a time, and the results have to come out in program order.

The Core i series actually executes instructions out of order internally, and the newer Intel Atom processors do too (the original Atom was an in-order design), but you still have to schedule things properly to avoid trying to process data that you have not yet created (if that makes sense).
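To make "data that you have not yet created" concrete, here's a minimal sketch in plain Python (my own toy example, the variable names are invented): the last line depends on an earlier one, so no amount of cores or clever reordering can let it run first.

```python
# These two assignments are independent, so a scheduler (or an out-of-order
# core) is free to run them in either order, or side by side.
roof_colour = "red"
window_colour = "blue"

# This line *reads* roof_colour, so it cannot run until the assignment above
# has produced that value: a read-after-write (data) dependency.
print("the roof is " + roof_colour)
```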

I guess the easiest way to answer your question is:
some tasks must happen in series.
Consider these tasks:
1, get a piece of paper
2, draw a blue house
3, draw windows on the house
4, paint the roof red
5, paint the windows blue
6, tell me, what colour is the roof?
7, what colour are the windows?

There are a few things that may not be obvious.

First, that is not an optimal task list.
You draw a blue house, put down the blue pen to pick up a red pen, then put down the red pen to pick up the blue pen again.
So if you were scheduling instructions for your CPU to do this, you'd put them into the best/most efficient order (see the sketch after the reordered list below).

1, get a piece of paper
2, draw a blue house
3, draw windows on the house

4, paint the windows blue
5, paint the roof red

6, tell me, what colour is the roof?
7, what colour are the windows?
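Here's a rough sketch of that kind of reordering in Python (my own toy scheduler, not how a real CPU or OS does it): each task lists what it depends on and which pen it needs, and we always pick a ready task, preferring the pen already in hand so we don't keep swapping.

```python
# Toy "instruction scheduler": task -> (dependencies, pen it needs)
tasks = {
    "get paper":         ([],                None),
    "draw house":        (["get paper"],     "blue"),
    "draw windows":      (["draw house"],    "blue"),
    "paint roof":        (["draw house"],    "red"),
    "paint windows":     (["draw windows"],  "blue"),
    "say roof colour":   (["paint roof"],    None),
    "say window colour": (["paint windows"], None),
}

done, order, pen_in_hand = set(), [], None
while len(done) < len(tasks):
    # A task is "ready" once everything it depends on has finished.
    ready = [t for t, (deps, _) in tasks.items()
             if t not in done and all(d in done for d in deps)]
    # Prefer a ready task that uses the pen we are already holding.
    ready.sort(key=lambda t: tasks[t][1] != pen_in_hand)
    chosen = ready[0]
    order.append(chosen)
    done.add(chosen)
    if tasks[chosen][1] is not None:
        pen_in_hand = tasks[chosen][1]

print(order)
```

Running it should print the same sort of order as the list above: the paper, all the blue work, then the red work, then the two answers.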


Now imagine that you have help, like a second core (not a hyper-thread yet).
You still have the same basic seven tasks, but you need to co-ordinate between two people.

So person 1 can get the paper, and person 2 can draw the house.

But the thing is that person 2 can't start drawing the house until they know that person 1 has the paper ready.

Because there are no shared pre-processing resources, the fact that there are two cores actually means that more coordination checks need to be done.

Person 2 can't draw a house until person 1 has the paper.
Person 1 can't colour the roof until person 2 has finished drawing the house.

Person 1 and person 2 could take a colouring task each (person 1 can colour the roof and person 2 colour the windows).

Either of them can say what colour those are, but of course you can only complete the task that asks what colour something is after it has been coloured.

(So now, as well as ordering tasks efficiently, you need to make sure that the split of tasks is workable. If you give person 1 tasks 1-5 and person 2 tasks 6 & 7, then person 2 has to wait around until person 1 has finished.)

There is no time gain, and worse, resource 2 is idle whilst resource 1 works, and resource 1 is idle at the end whilst resource 2 works (the sketch below shows the difference).
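Here's a rough way to see that in Python (my own toy example: threads stand in for the two people, every task just sleeps for 0.1 s, and a task waits until everything it depends on is done):

```python
import threading
import time

# Tasks 1-7 from the list above; each maps to the tasks it depends on.
DEPS = {1: [], 2: [1], 3: [2], 4: [2], 5: [3], 6: [4], 7: [5]}

def run_split(person1, person2):
    """Time two 'people' (threads) working through their task lists."""
    done = set()
    cond = threading.Condition()

    def worker(my_tasks):
        for t in my_tasks:
            with cond:
                # Wait until every task this one depends on has finished.
                cond.wait_for(lambda: all(d in done for d in DEPS[t]))
            time.sleep(0.1)          # pretend to do the work
            with cond:
                done.add(t)
                cond.notify_all()

    start = time.time()
    people = [threading.Thread(target=worker, args=(p,))
              for p in (person1, person2)]
    for p in people:
        p.start()
    for p in people:
        p.join()
    return round(time.time() - start, 2)

# Person 2 only gets the two questions: they idle until the colouring is done,
# so this is barely faster than doing all seven tasks alone (about 0.7 s).
print(run_split([1, 2, 3, 4, 5], [6, 7]))
# Share the colouring and the questions between the two people: real overlap.
print(run_split([1, 2, 3, 4, 6], [5, 7]))
```

With these made-up timings the lopsided split comes out at roughly 0.6 s against about 0.5 s for the shared one; the numbers are tiny, but the point is that person 2 spends most of the first run standing around.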



The same is true for hyper-threading, but because the two threads share pre-processing resources the picture changes a little.

Instead of person 2 waiting outside to be told that the paper is there before they can go in and start drawing, it's more like they are already in the room, can see that the paper is there, and can start drawing straight away.



Obviously out-of-order processing makes sense: if you get the paper, draw the house and colour the roof, you can say what colour the roof is before the windows have even been drawn or coloured. But you still can't say what colour the windows are before they have been coloured.
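If it helps, here's a small Python sketch of that idea (my own toy example using a thread pool, not what the CPU does internally): answers come back in the order the work finishes, not the order it was asked for, but you still can't report a colour before its painting is done.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def paint(part, colour, seconds):
    time.sleep(seconds)              # pretend the painting takes this long
    return f"the {part} is {colour}"

with ThreadPoolExecutor(max_workers=2) as pool:
    jobs = [pool.submit(paint, "windows", "blue", 0.5),   # asked for first, slow
            pool.submit(paint, "roof", "red", 0.1)]       # asked for second, fast
    # as_completed hands back each answer as soon as it is ready, so the roof
    # answer comes out first even though it was requested second.
    for job in as_completed(jobs):
        print(job.result())
```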


Whichever way you look at it:
each core or each hardware thread can still only do one thing at a time.

Multiple cores and multiple hardware threads do add some speed by letting tasks run in parallel (like the picture-drawing example above), but this does add some complexity: for example, your OS has to be aware of the technology and be optimised to use it (the snippet below shows one small way a program asks the OS about this).
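For what it's worth, here's a small Python snippet showing one way a program asks the OS how many logical CPUs (cores x hardware threads) it has to schedule onto (os.sched_getaffinity is Linux-only, hence the check):

```python
import os

# Logical CPUs in the machine: physical cores x hardware threads per core.
print("logical CPUs in the machine:", os.cpu_count())

# On Linux the OS can restrict a process to a subset of those CPUs;
# sched_getaffinity reports which ones the scheduler will actually use for us.
if hasattr(os, "sched_getaffinity"):
    print("logical CPUs this process may use:", len(os.sched_getaffinity(0)))
```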
 