I call myself a C++ programmer, but I have been using a mix of C and C++ for all my work. My work centers around scientific computing, or numeric programming: I have written a multiobjective genetic algorithm, a feed-forward neural network that uses backpropagation training, and I am currently writing a fuzzy inference engine.
My preferred way of handling multidimensional matrices is to model them as a single one-dimensional array. As an example, let's say we want to process a two-dimensional matrix; I handle it as a one-dimensional array using the following mapping:
twoDimArray[i][j] = oneDimArray[i*numCols+j] ;
I declare oneDimArray as a double * and then dynamically allocate memory. One of the main reasons why I shifted to using a one-dimensional array is so that I can pass that array to a function as a constant array with a constant address:
void fun(const double * const oneDimArray) ;
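To make the whole arrangement concrete, here is a minimal sketch of how the pieces fit together; sumMatrix and the dimensions are just placeholders for illustration, not code from my actual projects:

#include <iostream>

// the callee can neither reseat the pointer nor modify the elements
double sumMatrix(const double * const oneDimArray,
                 const unsigned int numRows, const unsigned int numCols)
{
    double sum = 0.0 ;
    for (unsigned int i = 0 ; i < numRows*numCols ; ++i)
        sum += oneDimArray[i] ;
    return sum ;
}

int main()
{
    const unsigned int numRows = 3, numCols = 4 ;
    double * oneDimArray = new double[numRows*numCols] ;

    // element (i, j) lives at offset i*numCols + j (row-major)
    for (unsigned int i = 0 ; i < numRows ; ++i)
        for (unsigned int j = 0 ; j < numCols ; ++j)
            oneDimArray[i*numCols + j] = static_cast<double>(i + j) ;

    std::cout << sumMatrix(oneDimArray, numRows, numCols) << std::endl ;
    delete [] oneDimArray ;
    return 0 ;
}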
Also, this allows me to avoid ugly constructs such as

void func(const double twoDimArray[][numCols]) ;

where the column count has to be baked into the type at compile time.
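The usual alternative for a dynamically sized matrix is a pointer-to-pointer, and that brings its own const headache; this is just a sketch of the signatures involved, not code from my projects:

// a double** does not convert implicitly to const double** ;
// the const-correct parameter type ends up being this mouthful
void func(const double * const * twoDimArray,
          unsigned int numRows, unsigned int numCols) ;

With the flat array, const double * const says everything I want in one go.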
That's the history; now for the sad present.

With my previous arrangement, every time I tried to access a value outside the valid range I would get a very familiar segmentation fault. With STL vectors, however, the following is possible:
const unsigned int numElements = 10 ;
std::vector<double> array(numElements) ;
array[numElements] = 3.14159 ; // eeeks !!! one past the end, and no segfault in sight
You see, STL vectors are dynamic and can resize themselves, but operator[] performs no range check: an out-of-bounds write like the one above is undefined behaviour, and it may well scribble past the end without crashing. So now I don't know whether I'm going outside the valid range. In the pre-STL days a segfault was a warning that I was doing something wrong, but now I have to be extra, extra careful 😦
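For what it's worth, vector::at() is the checked alternative; here is a small sketch of the difference (the unchecked write is commented out because its behaviour is undefined, so no particular outcome is guaranteed):

#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    const unsigned int numElements = 10 ;
    std::vector<double> array(numElements) ;

    // array[numElements] = 3.14159 ;  // unchecked access: undefined behaviour, often no segfault

    try
    {
        array.at(numElements) = 3.14159 ;  // checked access: throws instead of corrupting memory
    }
    catch (const std::out_of_range & e)
    {
        std::cerr << "caught: " << e.what() << std::endl ;
    }

    return 0 ;
}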