The Visitor Pattern – part 1: basics and introduction

The Visitor Pattern is one of the fundamental implementation patterns in software development. It solves a basic use case which occurs in nearly all applications that use inheritance to implement a system of base classes and derived classes. But unfortunately, the Visitor Pattern comes with a big disadvantage: it is one of the most misunderstood and misused patterns I know. Therefore, I want to spend some time analyzing the pattern, explaining its strengths, looking for the reasons for the misunderstandings and showing some ways to implement it. As these are large topics, I will split them up into several articles. Within this article we will start with the basic ideas of the pattern and have a look at the technical background. Within the next articles we will continue with concrete implementation examples for different use cases, remove the misunderstandings by finding the real meaning of the pattern, compare it with other patterns and have a look at the pros and cons of the Visitor.

 

Motivation

Collections are widely used data structures. In object-oriented systems, these collections often contain objects of different types, stored as pointers to a common base type. Algorithms or queries which operate on these data collections will on the one hand use the common base interface and on the other hand need to use the type specific interface. Furthermore, the data collections may have very different kinds of complexity: a list, a tree or some other structure. Independent of the data structure, and independent of whether the elements are stored as base class pointers, we want to iterate over the elements and execute some algorithms based on them.

 

Reasons for Misunderstandings

Before we look at the Visitor Pattern I want to explain why there is so much confusion and misunderstanding regarding this pattern.

Many of the software design patterns are easy to understand. The patterns themselves and the involved classes and functions have clear names telling us what they are used for. The pattern implementations have a simple structure and they use well known language specific features.

But the Visitor pattern is different. It doesn't have a simple structure but a rather complex one. It doesn't have clear naming. The name Visitor may arouse expectations which can be totally wrong. The interface name and the function names are meaningless too. And finally, it isn't based on language specific features but is a workaround for language specific limitations.

You can see the confusion about the Visitor pattern if you read a couple of articles and compare their statements. You will find very different statements about the purpose of the pattern, the use cases it solves and the reasons why you should use it. Please don't get me wrong, these articles are not incorrect at all, but they use the Visitor pattern in an unsuitable context. There are many use cases which can be solved better with other patterns or even with standard language features. In these cases, the Visitor pattern is over-engineering which results in some big disadvantages like less understandable and less maintainable source code with a higher chance for errors.

Within this and the future articles I want to analyze and explain the Visitor pattern in detail to clarify and remove all the sources for misunderstandings.

 

Basics

After the long foreword and introduction, we will now start getting to know the Visitor pattern and its implementation. At first, we will try to find a definition for this pattern. As mentioned before, you will find a lot of different explanations and definitions which are not contradictory but may contain too many details and facts which are true in general but not specific to the Visitor pattern. Therefore, I want to give this short definition:

The Visitor design pattern is a way to traverse all elements of a data container and get the type specific element instances. It separates the algorithms from the object structure on which they operate.

The definition mentions three important points: traverse elements, get type specific instance and separation of concerns.

Let’s compare this definition with the definition of the Iterator pattern: The Iterator pattern is a design pattern which uses an iterator to traverse a container and access the container’s elements. It separates the algorithms from the object structure on which they operate.

So, the Visitor pattern and the Iterator pattern are very similar. They both separate the algorithms from the data structure. Therefore, this separation of concerns isn't a criterion for choosing the Visitor pattern. But unfortunately, you will find a lot of articles which use exactly this criterion as the reason to choose it.

But there are also some differences between the two patterns. They traverse the elements in different ways and they return different element instances. The Iterator pattern returns the element as it is stored: if the data container holds elements as pointers to the common base class, the base class pointer is returned. In contrast, the Visitor pattern provides the specific element type instance. It will not hand out the base class pointer, but the specific derived types. The way the elements are traversed and the way they are returned are the real characteristics of the Visitor pattern.

The Visitor pattern is implemented by using “double dispatch”. This technique makes it possible to access the specific element types without any workaround like casting. To understand the pattern, we must understand this implementation technique. Therefore, we will continue with a short analysis and comparison of “single dispatch” and “double dispatch”.

 

Single Dispatch

The decision of which version of a method to call based on a single object is called single dispatch. This is the standard way of calling a function and it is directly supported by common object-oriented languages. When you call a regular virtual function, it is a single dispatch. The code that gets executed depends on the runtime type of a single object.

The following code shows an example application which uses single dispatch. We have data elements with a common virtual function and object specific non-virtual functions. The element instances are stored within a container as pointers to the base class. Furthermore, we have a printer algorithm which loops over the container elements and calls the common virtual function and the object specific non-virtual functions to print the outputs.

//------------------------------
// elements
//------------------------------

class BaseElement
{
public:
  virtual void DoSomething(){ std::cout << "DoSomething() of Base was called" << std::endl; }; 
};

class DerivedElementA : public BaseElement
{
public:
  virtual void DoSomething(){ std::cout << "DoSomething() of A was called" << std::endl; };

  std::string GetName() { return "element a"; }
};

class DerivedElementB : public BaseElement
{
public:
  virtual void DoSomething(){ std::cout << "DoSomething() of B was called" << std::endl; };

  std::string Name() { return "element b"; }
};

//------------------------------
// data container
//------------------------------

class DataContainer
{
public:
  std::vector<BaseElement*> mElements;
};

//------------------------------
// algorithm
//------------------------------

class Printer
{
public:
  void ShowAll(DataContainer& data)
  {
    for (const auto& element : data.mElements)
    {
      element->DoSomething();

      DerivedElementA* elementA = dynamic_cast<DerivedElementA*>(element);
      if (elementA)
      {
        std::cout << elementA->GetName() << std::endl;
      }

      DerivedElementB* elementB = dynamic_cast<DerivedElementB*>(element);
      if (elementB)
      {
        std::cout << elementB->Name() << std::endl;
      }
    }
  }
};

//------------------------------
// application
//------------------------------

int _tmain(int argc, _TCHAR* argv[])
{
  DerivedElementA* elementA = new DerivedElementA();
  DerivedElementB* elementB = new DerivedElementB();

  DataContainer container;
  container.mElements.push_back(elementA);
  container.mElements.push_back(elementB);

  Printer printer;
  printer.ShowAll(container);

  delete elementA;
  delete elementB;

	return 0;
}

 

The source code shows two examples of single dispatch function calls used to call the element functions. Based on the concrete instance we can call the object specific functions, and based on the base class pointer we can call the virtual function, so that the corresponding object specific implementation is executed.

C++ can do this run-time dynamic dispatch for one type at a time, so it is a single dispatch. But C++ is not able to select a function based on two dynamic types, which would be a double dispatch. Another language specific limitation comes with virtual functions: we cannot add a virtual function to a class hierarchy without modifying the base class which provides the interface.

Within the example we can see these limitations. It would not be possible to call the type specific functions of the derived classes without some kind of workaround, in our case the use of a dynamic cast. This workaround is necessary because C++ does not support double dispatch.

 

Double Dispatch

Double dispatch means that the decision which version of a method is invoked is based on a combination of two objects and their runtime types. Double dispatch is not directly supported by C++. Workarounds for this limitation are dynamic casts, the Visitor pattern or the use of suitable third-party libraries.

We will now change the previous example a little bit and try to eliminate the need for the dynamic cast. This can be done if we implement our own double dispatch mechanism. Our goal is to get the runtime specific object via the base class pointer. So, we must add a virtual function to our base class to request the actual object instance. Each derived class can override this virtual function and return its actual type specific instance. Furthermore, we need a receiver which can accept these different type specific instances. In the example, the receiver is the printer algorithm. Therefore, we will extend the printer with an interface to receive the object instances. And the element classes will be extended by the virtual function to request the element instance.

//------------------------------
// forward declaration
//------------------------------

class DerivedElementA;
class DerivedElementB;

//------------------------------
// printer interface
//------------------------------

class IPrinter
{
public:
  virtual void ShowDerivedElement(DerivedElementA* elementA) = 0;
  virtual void ShowDerivedElement(DerivedElementB* elementB) = 0;  
};

//------------------------------
// elements
//------------------------------

class BaseElement
{
public:
  virtual void DoSomething(){ std::cout << "DoSomething() of Base was called" << std::endl; };

  virtual void GetElement(IPrinter* printer){};
};

class DerivedElementA : public BaseElement
{
public:
  virtual void DoSomething(){ std::cout << "DoSomething() of A was called" << std::endl; };

  std::string GetName() { return "element a"; }

  virtual void GetElement(IPrinter* printer){ printer->ShowDerivedElement(this); };
};

class DerivedElementB : public BaseElement
{
public:
  virtual void DoSomething(){ std::cout << "DoSomething() of B was called" << std::endl; };

  std::string Name() { return "element b"; }

  virtual void GetElement(IPrinter* printer){ printer->ShowDerivedElement(this); };
};

//------------------------------
// data container
//------------------------------

class DataContainer
{
public:
  std::vector<BaseElement*> mElements;
};

The printer algorithm must implement the instance receiver functions and is now able to call the type specific element methods within these instance receivers.

//------------------------------
// algorithm
//------------------------------

class Printer : public IPrinter
{
public:
  void ShowAll(DataContainer& data)
  {
    for (const auto& element : data.mElements)
    {
      element->GetElement(this);
    }
  }

  void ShowDerivedElement(DerivedElementA* elementA)
  {
    elementA->DoSomething();
    std::cout << elementA->GetName() << std::endl;
  }

  void ShowDerivedElement(DerivedElementB* elementB)
  {
    elementB->DoSomething();
    std::cout << elementB->Name() << std::endl;
  }
};

//------------------------------
// application
//------------------------------

int _tmain(int argc, _TCHAR* argv[])
{
  DerivedElementA* elementA = new DerivedElementA();
  DerivedElementB* elementB = new DerivedElementB();

  DataContainer container;
  container.mElements.push_back(elementA);
  container.mElements.push_back(elementB);

  Printer printer;
  printer.ShowAll(container);

  delete elementA;
  delete elementB;

  return 0;
}

 

With this kind of implementation, we could successfully remove the dynamic cast. The algorithm instance calls an element method, which in turn calls back into an algorithm method. We have executed two v-table calls to connect the operation with the data. The functionality is based on a combination of two objects, and so we have implemented a double dispatch behavior.

Of course, you may ask whether it makes sense to implement such a complex system of mutual method calls just to have something which behaves like double dispatch. We could use single dispatch instead and would get the same results. This is a very important fact! Such a double dispatch implementation is always complex, creates additional dependencies and makes the source code harder to understand and to maintain. But on the other hand, it can add some benefit, like the elimination of the dynamic cast, which helps to eliminate errors. I would suggest using single dispatch as the default implementation technique. You should use double dispatch only in use cases where the implementation leads to benefits which exceed the disadvantages of the complex source code.

 

Definition

Within the “Basics” chapter we have already seen a short definition of the Visitor pattern. Based on this definition and on the double dispatch technique I want to explain the Visitor pattern in more detail.

The Visitor pattern defines an infrastructure to access a complex and dynamic data structure. It implements an enumeration over the data structure elements independent of the structure's type or complexity. To access each element, a double dispatch technique is used which requests the specific element type instance and provides this instance to a receiver. This allows implementing algorithms based on the data structure. These algorithms use the enumeration feature to traverse all elements and implement the element instance receiver functions. So, they get access to the type specific instances of all elements.

The Visitor pattern allows a separation of the data structure, the traversal of the structure and the algorithms based on the data. Furthermore, it allows access to the specific object independent of whether it is stored as the specific type or as a pointer to one of its base classes. So, the data structure can hold elements without knowing their real types. Unfortunately, the pattern can only be implemented by using double dispatch. As we learned, double dispatch is based on a combination of two objects. This unavoidably adds a dependency between these two objects, in this case between the data elements and the algorithms. Later on, we will see the issues and disadvantages of this dependency.

 

Structure of the Pattern

The definition and explanation above already names the components of the Visitor pattern. These are:

  • Data Elements
  • Data Container
  • Enumerator
  • Algorithm

 

If you look at the Visitor pattern definition given by the “Gang of Four” you will not find an “Enumerator” component. In my opinion, the pattern should not be used in use cases where you don't need to enumerate over elements. There are much better patterns available for these cases. I will explain this in more detail later.

Many of the examples and definitions out there also respect this aspect and contain a traversal of the data elements. But most of them do not explicitly separate the algorithms from the traversal. As these are very different concerns, they should be separated into two different components. Therefore, my big picture of the pattern explicitly contains the “Enumerator” component.

 

Naming

As mentioned before I don't like the name of the pattern and its components and functions. They are not bad at all, but I think they are too general. They are suitable to describe a generic concept like double dispatch, but they are not sufficiently concrete to describe such a specific design pattern.

As we have seen in the double dispatch chapter, a client will not use the element object directly. Instead it will call an element function which should “accept” the client as receiver for the element instance. Within this element function the element instance is sent back to the client so the client can “visit” the specific element instance. Therefore, the Visitor pattern defines these two methods: “accept” and “visit”. And according to this visiting thought it is called “Visitor”. But you see, even within this attempt at a serious explanation, the naming sounds strange.

Within this article and the code examples I want to use a clear naming. Of course, I will continue to call the pattern “Visitor pattern”. Furthermore, I want to name the interfaces according to the pattern. This will ensure an explicit relation to the pattern, and I think the source code is easier to understand if common names are used. So, I want to call the interface which must be implemented by the data elements “Visitable” and the interface of the algorithms and enumerators “Visitor”. The function to get the element instance will be called “GetInstance” and not “accept”. The function to transfer the instance to the instance receiver will be called “InstanceReceiver” and not “visit”. This function could also be called “ProcessInstance” or something along these lines, but as we don't know whether the algorithm reads, writes or even uses the element, I prefer a general name like “InstanceReceiver”.
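
To make this naming concrete, here is a minimal sketch of the two interfaces using exactly these names. It is only meant to illustrate the naming; DerivedElementA and DerivedElementB stand for the concrete element types from the examples above, and the real implementations will follow in the next articles.

//------------------------------
// naming sketch (not a final implementation)
//------------------------------

class DerivedElementA;
class DerivedElementB;

// implemented by the algorithms and enumerators
class Visitor
{
public:
  virtual void InstanceReceiver(DerivedElementA* element) = 0;   // classic name: "visit"
  virtual void InstanceReceiver(DerivedElementB* element) = 0;
};

// implemented by the data elements
class Visitable
{
public:
  virtual void GetInstance(Visitor* visitor) = 0;                // classic name: "accept"
};

A data element implements GetInstance by calling visitor->InstanceReceiver(this), just like the elements call the printer in the double dispatch example above.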

 

Example application

In the further course of the articles I want to introduce a simple example application. This application shows use cases of a simple shop system. It contains some articles which get stored in simple lists or even complex structures. And we want to create some queries to access and analyze the data. The example will therefore contain the components we have seen so far: elements, data containers, enumerators and algorithms. Within the example we will call these components: elements, containers, enumerators and queries.

 

Summary and Outlook

This first article about the Visitor pattern was intended as a short introduction to the topic. But the resulting size of the article is a good indicator of the complexity of the pattern. So far, we got an overview of the use cases which can be solved by the pattern. We have analyzed the technical background needed to implement it. And we have built up detailed background knowledge to understand the purpose of the Visitor. Based on this knowledge you will not belong to the group of developers who misunderstand the pattern and therefore don't use it at all or use it in a wrong way.

Within the next articles we will expand our knowledge and see some concrete implementations of the pattern. These implementations will cover the different use cases like enumeration and dispatching and they will show different techniques to implement the pattern, like inheritance, composition or templates. Furthermore, we will analyze the pros and cons of the pattern to get a better feeling whether a use case can be solved by the pattern or not.


Lambda Closures in C++

A closure is a concept of functional programming languages. C++ is a multi-paradigm language which offers features of different programming paradigms, but it isn't a real functional language; there are huge differences between a pure functional language and C++. Most implementations done with C++ are based on the procedural paradigm or the object-oriented paradigm. But of course, there are functional programming techniques which can be used in procedural or object-oriented environments too. If these techniques are used wisely, we can benefit from the advantages of the different programming paradigms. On the other hand, if these techniques are used in the wrong way, the resulting implementation can be a ragbag of the disadvantages of the different paradigms.

Within this article I want to explain how lambdas are used to implement closures. And I want to show a way to use this functional feature in mainly non-functional applications, e.g. in applications based on the object-oriented paradigm.

 

Lambda expression

A lambda expression is a mechanism for specifying a function. The primary use for a lambda is to specify a simple action. A lambda expression can be implemented directly as anonymous function, for example within an algorithm method, or it can be named and used by its name. Please read this article for a base explanation of lambda expressions and their alternatives like functors.

From a software design point of view, the main difference between a normal function and a lambda is the scope. Normal functions have access to the variables defined within their own scope. But functions created by a lambda expression have access to their own scope and to the parent scope where they are created.

The following implementation shows an according example. We want to write a function which increases all elements of an array by a variable addend. This can be implemented by using the STL transform algorithm. We can pass a transformation function to this algorithm. In this kind of implementation, the transformation function needs access to the parent scope variable for the addend. So, we can either implement an object which stores this variable, e.g. a functor, or we can use a lambda expression which has access to the parent scope.

void IncreaseAllElements(std::vector<int>& data, const int addend)
{
  auto Sum = [=](int element) {return element + addend; };

  std::transform(data.begin(), data.end(), data.begin(), Sum);
}

int _tmain(int argc, _TCHAR* argv[])
{
  auto data = std::vector<int> {-5, 1, 15};

  IncreaseAllElements(data, 20);

  return 0;
}

 

This example shows a very nice way to implement this simple use case. The code is easy to understand and to maintain. We have used a functional programming feature, maybe without knowing anything about functional programming or closures. That's fine in these kinds of implementations. But it can lead to some trouble in case the lambda expression and/or the parent function becomes more complex. In such situations, you may think about alternatives to the lambda expression. In a previous article I wrote about the possible disadvantages and issues of using closures in procedurally implemented applications.

Closure

Now we want to leave the procedural paradigm and implement in a more functional way. Within such a functional context we will see the real power of closures. So, let’s start with a more detailed definition and a technical way of looking at lambda expressions and closure.

A Lambda expression is an expression and as such it exists in source code only. At runtime, an object is generated based on the lambda expression. This object is called closure. A closure of a lambda is therefore something like an instance of a class. The closure object contains the functionality of the lambda expression and it contains the scope of the function where the lambda was created. Therefore, it can have access to the variables of this parent scope.

If we use the closure in the procedural way, like shown in the example above, the closure will be destroyed directly at the end of the statement. But if we implement in a functional way, we explicitly want to create and keep the closure. Typically, in such functional implementations, the parent function containing the lambda expression will return a function. The client will store this function reference and use it several times. The following source code shows the implementation of the above example in a functional way.

std::function<int(int)> CreateVectorAccumulator(const int addend)
{
  return [=](int element) {return element + addend; };  
}

int _tmain(int argc, _TCHAR* argv[])
{
  auto data = std::vector<int> {-5, 1, 15};

  auto accumulator = CreateVectorAccumulator(20);

  std::transform(data.begin(), data.end(), data.begin(), accumulator);
  std::transform(data.begin(), data.end(), data.begin(), accumulator);
  std::transform(data.begin(), data.end(), data.begin(), accumulator);

  return 0;
}

 

This short example shows the power of closures. Of course, you can implement the same functionality without a lambda expression, but this will result in a more complex implementation.
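
For comparison, here is a rough sketch of how the same accumulator could look without a lambda, implemented as a functor; the class name is my own choice and not part of the example above.

// Hypothetical functor equivalent of the closure returned by CreateVectorAccumulator:
// the addend is "captured" manually via the constructor and stored as a member.
class VectorAccumulator
{
public:
  explicit VectorAccumulator(const int addend) : mAddend(addend) {}

  int operator()(const int element) const { return element + mAddend; }

private:
  int mAddend;
};

// usage, analogous to the functional example above:
// VectorAccumulator accumulator(20);
// std::transform(data.begin(), data.end(), data.begin(), accumulator);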

In the linked article, I wrote: “… some see this feature as a must have while others see it as a way to write obscure and error-prone code.” And this article further strengthens this idea. If you implement according to functional ideas, closures are really a must have as they will increase the code quality. But if you implement procedurally or object-oriented, closures tend to drift off in the opposite direction and their disadvantages will outdo their advantages. Therefore, please use this feature, but do not overuse it.


lambdas and scope variables

A lambda expression can access local variables of the scope in which it is used. You can pass these variables by value or by reference to the lambda function. The following example shows how to sort a vector of signed integers. They should be sorted by their absolute values and in descending order. Therefore, an according lambda function was created. Additionally, we want to know the number of function calls performed by the sort algorithm. So we add a variable in local scope and pass a reference to this variable to the lambda expression.

int _tmain(int argc, _TCHAR* argv[])
{
  auto data = std::vector<int> {-5, 1, 15};

  int numberOfFunctionCalls = 0;

  auto AbsoluteDescendingComparer = [&numberOfFunctionCalls](int a, int b)
  {
    ++numberOfFunctionCalls;

    return abs(a) > abs(b);
  };

  std::sort(data.begin(), data.end(), AbsoluteDescendingComparer);

  return 0;
}

 

The first pair of brackets “[…]” is used to pass local variables of the lambda's scope to the embedded lambda. The following syntax can be used (see the short example after the list):

  • “[&]” to pass all variables by references
  • “[=]” to pass all variables by value
  • “[&x]” to pass the variable “x” by reference, or “[x]” to pass it by value
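
The following small sketch illustrates the behavioral difference between capturing by value and by reference; the variable names are chosen for illustration only.

int counter = 0;
int offset = 10;

auto byValue = [=](int x) { return x + offset; };                  // offset is copied when the lambda is created
auto byReference = [&](int x) { ++counter; return x + offset; };   // works on the original variables

offset = 100;

int a = byValue(1);      // still uses the copied value 10, so a == 11
int b = byReference(1);  // uses the current value 100, so b == 101 and counter == 1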

 

Evaluation in context of procedural and object-oriented programming

From a technical point of view, it is very easy to use local scope variables in lambdas. But if we think about good software design we may see some issues. My first concern is: we create hidden function parameters which will make it more difficult to see the dependencies between the different parts of the application. Why do we have this feature of passing variables in the “[…]” part at all? We can pass all parameters as function parameters anyway. So why don’t we use the normal way we know from functions? In my opinion, it is not necessary to have two ways to pass variables into a lambda function.

But what if we don't want to use this feature? How can we implement a use case like the one shown in the example above? It would not be possible anymore. And I think that's fine! From a software design point of view, I would say the following: a lambda function is “a” function and therefore it should do one thing only – like any other function. In most (or all?) cases where we use the “[…]” feature, we smuggle some values into the function because we need these values for a second functionality which will be executed by our lambda. So, our lambda function gains additional responsibilities, which contradicts the software design principle that a function should do one thing only. If we have use cases like the one above, we should think about suitable components, like functors or classes. A functor or class can offer a second function which implements our second use case. In this case it can offer a getter method which tells us about the number of function calls and a function to reset this counter.

In general, I can see two main kinds of use cases for the “[…]” feature. If we pass variables by value, we have a factory use case. That means we need these values to initialize or set up the behavior of our lambda. In this case I would prefer a functor with an according constructor which takes these parameters.

If we pass the values by reference, we want to get additional results at the end of or during the execution of the function. In these cases, we add a second responsibility to the function. Therefore, a class would be a better choice because it can offer as many functions as needed for this use case. Furthermore, it can hide implementation details: in the case above, for example, we have to introduce a counter variable which is increased on every function call. This is an implementation detail which should be hidden by a class and not seen by the client.
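
As a rough sketch of this idea, such a class could look like the following; the class and method names are my own suggestion.

// Hypothetical comparer class which hides the call counter as an implementation detail.
class AbsoluteDescendingComparer
{
public:
  bool operator()(int a, int b)
  {
    ++mNumberOfFunctionCalls;
    return abs(a) > abs(b);
  }

  int GetNumberOfFunctionCalls() const { return mNumberOfFunctionCalls; }
  void ResetNumberOfFunctionCalls() { mNumberOfFunctionCalls = 0; }

private:
  int mNumberOfFunctionCalls = 0;
};

// usage: pass the comparer via std::ref so that std::sort does not count on an internal copy
// AbsoluteDescendingComparer comparer;
// std::sort(data.begin(), data.end(), std::ref(comparer));
// int calls = comparer.GetNumberOfFunctionCalls();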

Furthermore, functions with references to scope variables are functions with side effects. And side effects create spaghetti code. I know, the term “spaghetti code” is originally used for a confused program flow mainly caused by jump statements, but I also use this term for a confused data flow. I think, in modern software, a messy data flow caused by functions with side effects is the main reason for erroneous code and for legacy code which is nearly unmaintainable.

 

Evaluation in context of functional programming

Until now I expressed a lot of negative thoughts about scope variables. But of course, this feature does not exist without a reason. If we use it in a suitable way it can offer some advantages. These advantages come to light if we implement in a functional style. Within the example above, the lambda was embedded and executed directly within the parent function. This procedural way of implementation normally works with function calls and function input parameters. But the conditions change if we don't execute the lambda directly. Instead, our parent function now returns the lambda function itself. So, the client is able to store this function object and execute the function call later. This functional way of implementation can benefit from the possibility to use the parent scope where the function is embedded. This feature is called “closure”. In short it means that a function can exist together with its surrounding context. Therefore, in this functional programming context, the variables are no longer “passed” to the function, they are “captured”.

At this point I want to stop and don't want to go into details about closures and functional programming with C++, as this is a separate topic and would go beyond the scope of this article. But there will be a further article on the topic “closures in C++”.

Of course, I mentioned this topic in this short section to show the real use case for scoped variable access in lambda functions. But in most cases, we use C++ to implement in a procedural or object-oriented way. And within these programming paradigms, the scoped variable access most often results in more disadvantages than advantages.

 

Summary

Lambda functions offer a mechanism to pass variables of the local scope into the lambda function. From a technical point of view this feature is easy to use and allows the use of lambdas in many more use cases. From a software design point of view, one should ask whether all these use cases should be implemented by using lambdas at all or whether there are much better alternatives. Therefore, some see this feature as a “must have” while others see it as a way to write obscure and error-prone code. And in my opinion, both are right.


functor vs. named lambda vs. anonymous lambda

A lambda expression is a mechanism for specifying a function. The primary use for a lambda is to specify a simple action. A lambda expression can be implemented directly as anonymous function, for example within an algorithm method, or it can be named and used by its name.

Of course, there are already some other possibilities to implement functions. Therefore, we may think about the question when we should use lambdas and when we should not use them. Furthermore, we have to decide between named and anonymous lambdas. In the following I will give you some guidelines which will help you to reach a decision.

This comparison is based on a simple use case: we want to sort a list of signed integers by looking at the absolute values and we want to sort in descending order. There exists an STL algorithm for sorting. So, we use this algorithm and provide a comparison function for two elements. At first, we implement this function by using a functor.

struct AbsoluteDescendingComparer
{
  bool operator()(int a, int b)
  {
    return abs(a) > abs(b);
  }
};

int _tmain(int argc, _TCHAR* argv[])
{
  auto data = std::vector<int> {-5, 1, 15};
    
  std::sort(data.begin(), data.end(), AbsoluteDescendingComparer());
    
	return 0;
}

 

Next, we implement the same function by using an anonymous lambda. This gives us the possibility to write the comparison directly within the sort algorithm call and therefore we can create very short and compact source code.

int _tmain(int argc, _TCHAR* argv[])
{
  auto data = std::vector<int> {-5, 1, 15};

  std::sort(data.begin(), data.end(), [](int a, int b) { return abs(a) > abs(b); } );

  return 0;
}

 

Another possible implementation will be a named lambda. We can create this lambda first and use it later within the algorithm.

int _tmain(int argc, _TCHAR* argv[])
{
  auto data = std::vector<int> {-5, 1, 15};

  auto AbsoluteDescendingComparer = [](int a, int b)
  {
    return abs(a) > abs(b);
  };

  std::sort(data.begin(), data.end(), AbsoluteDescendingComparer);

  return 0;
}

 

Comparison of the implementations

If we now ask ourselves which of the three alternatives we prefer, I'm sure most will say: the anonymous lambda. That's because it allows writing the code in a very compact way without any overhead. But what if we ask about our preference in case we have to maintain the code? In this case the decision may not be that easy anymore. The anonymous lambda has one major disadvantage: leaving out the function name results in a loss of information. The function name, if it is chosen wisely, offers a lot of information about the implemented function. Without this name, we have to understand the implementation itself to know what it is used for. In case we use an anonymous lambda we have to add a comment to explain the implementation.

Another issue which decreases the code readability and makes it more difficult to understand and maintain the code is the mixing of two concerns. We have the sort algorithm and we have a comparer. By using a named lambda we have a clear separation of concerns. If we use an anonymous lambda instead, these two functionalities are mixed together.

The separation of concerns becomes even more important in case the function itself becomes more complex. In such a case, an anonymous lambda as well as a named one are not suitable anymore because both are placed in the local scope of the including method. If such a method becomes too complex it should be refactored by extracting code into separate methods. Therefore, complex functionality should not be implemented by using a lambda.

Recommendations

For a simple functionality, needed in local scope only, a lambda can be used. Otherwise use a functor, a class member function or a non-member function. If you use a lambda I would recommend preferring the named lambda. An anonymous one should only be used if the functionality is simple enough to be understood in the same time it takes to understand the meaning of a function name.

 


std::vector vs c-style array

Within this article I want to compare the base array type and the vector offered by the std library. I want to call these two types “c-style array” and “std::vector”. The comparison will show you the pros and cons of the different types and give you a guideline when to use what.

The STL offers another base container, the std::array. I don't want to pay attention to this container within this comparison, for two reasons. First, it is very easy to decide whether you want to use a std::vector or a std::array: you should use std::array when the container size is known at compile time; if this size isn't known at compile time, or your container may grow or shrink, you should use std::vector. Second, a comparison between std::array and c-style array is important, but this is a separate topic and I don't want to mix different topics within one article. In a previous article you can find an according comparison.

 

Use cases

The c-style array is by default a fixed size container. Therefore you may ask whether it makes sense to compare it with a std::vector at all. In the following overview I will show the base use cases for containers and the according STL type or c-style type which may be used for the implementation.

  • The size is fixed: STL container std::array; c-style equivalent: array on the stack (e.g. "int someArray[5]").
  • The container is created from actual data, so the initialization has a variable size; after this creation the data is only read and the size stays fixed: STL container std::vector; c-style equivalent: array on the heap (e.g. "int* someArray = new int[5];").
  • The size is variable and can grow or shrink during runtime: STL container std::vector; c-style equivalent: array on the heap (and maybe some additional management data).

 

As you can see, the std::vector is used for all cases with a variable size container. As the STL does not contain a specialized container for the second use case we only have to distinguish between variable and fixed size containers.

The c-style array can be used to implement all three use cases. But here we have to distinguish between the second and the third use case: to implement size changes during runtime we need an additional management component for the array grow and shrink functionality.
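
Just to illustrate what such a management component has to do, the following is a rough sketch of a manual grow operation for a heap array; a std::vector performs these steps (and more, e.g. capacity management and exception handling) internally.

// Sketch of a manual "grow" operation for a c-style heap array.
// The caller has to track the old and the new size himself.
int* GrowArray(int* oldData, const int oldSize, const int newSize)
{
  int* newData = new int[newSize];

  for (int i = 0; i < oldSize; ++i)   // copy the existing elements
  {
    newData[i] = oldData[i];
  }

  delete[] oldData;                   // release the old memory block
  return newData;
}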

 

Base concept of std::vector

A vector is the default container. Bjarne Stroustrup suggests: “By default, use Vector when you need a container”. Therefore, if you don't have any good reasons to use another container type, you should use a vector.

A vector is a dynamic array that goes on the heap. Its size does not need to be known at compile time. If you add or remove items, the underlying array changes size.

You can think of a std::vector as a wrapper to „int *someArray = new int[5]“ where a std::array is a wrapper to „int someArray[5]“.

 

Initialize, change, read

The following example application shows the initialization of the containers with three elements. The vector size isn't set explicitly. If you add an element to a std::vector, it will reserve additional memory if necessary.

int _tmain(int argc, _TCHAR* argv[])
{
  // c-style array
  int values1[3] = {1, 2, 3};

  values1[1] = 15;

  std::cout << "size: " << (sizeof(values1) / sizeof(*values1)) << std::endl;

  for (auto element : values1)
  {
    std::cout << element << std::endl;
  }
  
  // std vector
  std::vector<int> values2 {4, 5, 6};
  
  values2[1] = 25;

  std::cout << "size: " << values2.size() << std::endl;

  for (auto element : values2)
  {
    std::cout << element << std::endl;
  }
}

 

Within this first example you can see that the syntax for the variable declaration is slightly different, but element access and usage within a range based for loop is the same for both types. Furthermore, the example shows a first big difference of both containers: the std::vector knows its size. For the c-style array you have to create an additional variable to store the array size or, as shown in the example, you can calculate the size by dividing the memory reserved for the array by the memory needed to store a single element.

 

Use as method parameter

In a real application, you will split up your program logic into functions and therefore a standard use case is to pass parameters to functions. So, let’s change the example a little bit and create functions which should print the containers.

void PrintArray(int* values, int size)
{
  std::cout << "size: " << size << std::endl;

  for (int i = 0; i < size; ++i)
  {
    std::cout << values[i] << std::endl;
  }
}

void PrintVector(const std::vector<int> &values)
{
  std::cout << "size: " << values.size() << std::endl;

  for (auto value : values)
  {
    std::cout << value << std::endl;
  }
}

int _tmain(int argc, _TCHAR* argv[])
{
  // c-style array
  int values1[3] = { 1, 2, 3 };

  PrintArray(values1, 3);

  // std vector
  std::vector<int> values2{ 4, 5, 6 };

  PrintVector(values2);
}

 

This simple use case shows a very important difference between a c-style array and a std::vector. The c-style array will be passed as a pointer to its first element. The std::vector instead offers the standard copy semantics. Therefore, you can use standard coding practices and for example pass the parameter by const reference. So, the std::vector doesn't convert to a pointer unless you explicitly ask it to.

A c-style array is passed as a pointer to the first element. This principle is called “pointer decay”. As the array decays to a pointer, the size information is lost. Therefore, we have to pass an additional function parameter which contains the array size. The previously shown calculation depending on the memory size will no longer work, as the memory size of the passed parameter is now the size of a pointer. Furthermore, as the size information is lost, the range based for loop cannot be used anymore.

You can see that this little design change of the application has nearly no impact if you use a std::vector. You can simply refactor your code and extract the according code section as a method. But if you try to do the same for a c-style array, you will get in trouble, and as a result of the pointer decay you have to change your implementation fundamentally. Within the following article you can see some other impacts of the pointer decay, especially in base and derived class relationships: Arrays and inheritance, a source of errors.

 

Use as return value

C-style arrays cannot be assigned to. So, c-style arrays cannot be used on the left side of an assignment and therefore they cannot be used as return values either. But you can return a pointer to a c-style array. Unfortunately, this results in two big issues: you have to know the size of the created array outside of the method and you have to implement the resource management based on this wild pointer. Both issues are a good source of errors.

A std::vector supports the C++ copy semantics. Therefore, it can be used as a return value. The following example shows a possible implementation for both kinds of containers.

int* CreateArray()
{
  int* values = new int[3] { 1, 2, 3 };
  return values;
}

std::vector<int> CreateVector()
{
  std::vector<int> values = { 4, 5, 6 };
  return values;
}

int _tmain(int argc, _TCHAR* argv[])
{
  // c-style array
  int* array1 = CreateArray();
  int size = 3;   // you have to know the size of the array
  
  for (int i = 0; i < size; ++i)
  {
    std::cout << array1[i] << std::endl;
  }
  
  delete[] array1; // you have to delete the array
  
  // std vector
  std::vector<int> values2 = CreateVector();
  
  for (auto element : values2)
  {
    std::cout << element << std::endl;
  }
}

 

Range check

One of the base concepts of C and C++ is to provide built-in types without any overhead which could result in performance loss or higher memory usage. Therefore, such types do not contain a management layer with range checks and similar features. Of course, this missing secure access layer can result in implementation errors which cause undefined behavior of your application.

The std::vector expands this concept and provides a combination of the advantages of both ideas. In favor of performance it comes without a management layer in release builds, and on the other hand it offers a secure access layer in debug builds. If you create a debug build, the std::vector of common standard library implementations (e.g. the Microsoft one) contains range checks. This will help to find implementation errors. The following source code shows an according example. If you execute this example as a debug build, the erroneous std::vector access will throw an out of range exception.

int _tmain(int argc, _TCHAR* argv[])
{
  // c-style array
  int values1[3] = { 1, 2, 3 };

  values1[5] = 15;   // undefined behavior
   
  // std vector
  std::vector<int> values2 {4, 5, 6};
   
  values2[5] = 25;  // out of range exception in debug build  
}

 

Standard algorithms

The std::vector provides an STL-consistent interface and can be used with the STL algorithms. The STL algorithms also provide support for c-style arrays. If you have c-style arrays with known size, you can directly use them together with the algorithms. As shown in the previous examples, pointer decay will result in a loss of the size information and therefore in a loss of direct support of algorithms. But you can use a template based approach to keep the size information of a c-style method parameter.

The following example shows how to use both container types in STL algorithms. In this case the sort algorithm is used. Furthermore, it shows an implementation to pass a c-style array as a reference to template function, which keeps the size information and allows the use of STL algorithms.

// template based function which keeps size information 
// of c-style array and therefore allows to use STL algorithms
template <typename T, std::size_t Size>
void SortArray(T(&values)[Size])
{
  std::sort(std::begin(values), std::end(values));  
}

int _tmain(int argc, _TCHAR* argv[])
{
  // c-style array
  int values1[3] = { 7, 5, 3 };
  
  std::sort(std::begin(values1), std::end(values1));

  for (auto element : values1)
  {
    std::cout << element << std::endl;
  }
  
  // std vector
  std::vector<int> values2 = { 7, 5, 3 };

  std::sort(values2.begin(), values2.end());
  
  for (auto element : values2)
  {
    std::cout << element << std::endl;
  }
}

 

Memory usage

A std::vector is a wrapper for a c-style array with additional management data needed for the grow and shrink functionality. This management data needs a few bytes extra space. Therefore, compared with a c-style array, the std::vector needs more memory. In most cases this extra memory can be ignored especially if you use large containers. Moreover, if you want to implement a c-style array which can grow and shrink you have to store additional management data too.
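
You can make this extra space visible with a small check like the following; the exact numbers depend on the compiler and the standard library implementation (the std::vector object itself typically consists of three pointers, plus the separately allocated heap block for the elements).

#include <iostream>
#include <vector>

int main()
{
  int cArray[5] = { 1, 2, 3, 4, 5 };
  std::vector<int> values = { 1, 2, 3, 4, 5 };

  std::cout << "c-style array:     " << sizeof(cArray) << " bytes" << std::endl;                  // 5 * sizeof(int)
  std::cout << "vector object:     " << sizeof(values) << " bytes" << std::endl;                  // management data only
  std::cout << "vector heap block: " << values.capacity() * sizeof(int) << " bytes" << std::endl; // element storage

  return 0;
}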

In my opinion, there is one use case where this higher memory usage will be crucial: You need a lot of small containers with fixed size during lifetime but variable size on initialization. In this case the grow and shrink functionality of the std::vector with its according management data is unnecessary overhead as it is not needed during lifetime of the container. It is only needed because the initialization size of the container is not known at compile time. If you have to create many of such containers and if they contain a small amount of data, a c-style array may be a good choice. But even in this case I would recommend the use of a c-style array only if you have a high memory workload which results in real issues. If you don’t have measurable issues you should use a std::vector anyway. The higher memory need is a small price which you should pay to get all the nice advantages of a std::vector which will help you to create more robust applications.

 

Performance

The C++ Core Guidelines suggest using a std::vector instead of a c-style array (see “SL.10: Prefer using STL array or vector instead of a C array”). That's because the std::vector offers a lot of advantages without a performance loss; it has the same performance as a raw c-style array.

But unfortunately, there is a persistent myth that for run-time speed one should use c-style arrays. One reason for this myth are examples which compare c-style array and std::vector speed. Such examples sometimes show a huge performance difference and explain this difference as a result of the std::vector management functionality for the size flexibility. These examples have one common problem: they use the std::vector in a wrong way. This poor implementation causes the performance loss. In the following we will see some typical examples.

A second reason for this myth to persist is comparing c-style arrays and std::vector in debug mode. As shown previously, the std::vector does range checks in debug mode. Of course, this will reduce the performance. The c-style array performance, in contrast, is the same in debug and release mode. Therefore, performance comparisons in debug mode are not meaningful.

As mentioned before, we will now look at some typical implementation issues which reduce the performance of a std::vector. After looking at some of the examples with supposedly bad performance of the std::vector, I found two main issues: slow write and slow read commands. The following source code shows a typical example of a slow write and a slow read implementation and the right way to implement it.

If you create a std::vector and fill it with data, you should set an expected size. Normally you know the approximate or average size of your container. If you set this size during creation you avoid unnecessary grow operations. The second part of the example shows two ways to access the std::vector elements: you can use the “at()” method or you can use the “[]” operator. But these are not two different spellings for the same thing; there is a functional difference too. The “at()” method will execute an additional range check, and this range check is done in release builds too. Therefore, it adds functionality for a secure element access, but of course this extra functionality comes with performance costs.

int _tmain(int argc, _TCHAR* argv[])
{
  // reserve memory before adding elements
  // e.g. you know the vector will have 100 up to 150 elements, 
  // then already reserve space for 100 elements
  std::vector<int> values;
  values.reserve(100);
   
  int x = 125;  // example only, will be a variable size of 100 to 150
  for (int i = 0; i < x; i++)
  {
    values.push_back(i);
  }

  // don't use at() unless you explicitly want a range check
  int element1 = values.at(17);   // range check on debug and release build
  int element2 = values[17];      // no range check
}

 

By avoiding these two implementation errors, a lot of the examples showing performance issues can be proved wrong. Let me mention two more performance thoughts before we finish this topic. Another important fact is that you should try to avoid inserting an element at the front of the std::vector. For an insert at the front, all existing elements have to be moved. If you cannot avoid such inserts you may use some other container. But do performance measurements in this case. Other containers like a list support an insert at the front, but they come with a different memory management which results in a performance loss in other situations. Therefore, even for a use case with (sporadic) inserts at the front, the std::vector may be the best choice.
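
For illustration, a small sketch of the difference:

std::vector<int> values = { 2, 3, 4 };

values.push_back(5);               // cheap: appends at the end
values.insert(values.begin(), 1);  // expensive: all existing elements have to be moved by one position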

A further performance improvement can be reached by using “emplace_back()” instead of “push_back()” while inserting into a std::vector. To keep it simple you can say: emplace_back constructs or moves an object directly into the container, whereas push_back copies it. If the object supports this, you will get a performance gain. You don't have to worry about whether the object supports a move or not; if it isn't supported, the function will copy it anyway. So, emplace_back can be used in any case and it will be as fast as or faster than push_back, depending on the object you want to add to the vector.
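
A small sketch of the difference; the Customer struct is just an example type and not part of the article.

#include <string>
#include <vector>

struct Customer
{
  std::string name;
  int id;

  Customer(const std::string& name, const int id) : name(name), id(id) {}
};

int main()
{
  std::vector<Customer> customers;
  customers.reserve(2);

  Customer existingCustomer("John Doe", 1);
  customers.push_back(existingCustomer);   // copies the already constructed object

  customers.emplace_back("Jane Doe", 2);   // constructs the element directly inside the vector

  return 0;
}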

 

Summary

If you need a container with variable size, your choice is easy for the normal use case. Let me repeat Bjarne Stroustrup's suggestion: “By default, use Vector when you need a container”. I would recommend the same and suggest using another container type only if it adds some benefit in your special use case. And please keep in mind: this benefit must be measurable, especially if you expect a performance gain. Unfortunately, often the choice of a container type is based on belief and not on measurable facts. Finally, I will finish the article with a short summary of the pros and cons of a std::vector compared to a c-style array.

Advantages of std::vector

  • Perfectly matches with the strict type system concept of C++
  • Copy semantic / no pointer decay / the array doesn’t convert to a pointer unless you explicitly ask it to
  • It knows its size
  • Range check in debug builds
  • No memory overhead beyond what it needs to hold its elements
  • Same performance as a c-style array
  • Direct support of use in STL algorithms

Disadvantages of std::vector

  • Slightly higher memory need as additional management data must be stored. This disadvantage is only true in case you have the second of the three main use cases shown at the beginning, with a variable size initialization but a fixed size during use of the container.

std::array vs c-style array

Within this article I want to compare the base array type and the array offered by the std library. I want to call these two types “c-style array” and “std::array”. The comparison will show you the pros and cons of the different types and give you a guideline when to use which kind of array.

Base concept of std::array

The std::array is designed as a zero-overhead wrapper for c-style arrays. It provides value semantics just like the other C++ containers. A std::array should have the same runtime performance as a c-style array.

Within the next chapters I want to compare both array types by looking at some base use cases.

Initialize, change and read

The following example application shows the initialization of an array with three elements. Furthermore, an array element will be changed and all elements will be read.

int _tmain(int argc, _TCHAR* argv[])
{
  // c-style array
  int array1[3] = {1, 2, 3};

  array1[1] = 15;

  std::cout << "size: " << (sizeof(array1) / sizeof(*array1)) << std::endl;

  for (auto element : array1)
  {
    std::cout << element << std::endl;
  }
  
  // std-style array
  std::array<int, 3> array2 = { 4, 5, 6 };

  array2[1] = 25;

  std::cout << "size: " << array2.size() << std::endl;

  for (auto element : array2)
  {
    std::cout << element << std::endl;
  }
}

Within this first example you can see that the syntax for the variable declaration is slightly different, but element access and usage of array in a range based for loop is the same for both types. Furthermore, the example shows a first big difference of both types: the std::array knows its size. For the c-style array you have to create an additional variable to store the array size or, as shown in the example, you can calculate the size by dividing the memory reserved for the array by the memory needed to store a single element.

Array as method parameter

In a real application, you will split up your program logic into functions and therefore a standard use case is to pass parameters to functions. So, let’s change the example a little bit and create functions which should print the array.

void PrintArray(int* values, int size)
{
  std::cout << "size: " << size << std::endl;

  for (int i = 0; i < size; ++i)
  {
    std::cout << values[i] << std::endl;
  }
}

void PrintArray(const std::array<int, 3> &values)
{
  std::cout << "size: " << values.size() << std::endl;

  for (auto& value : values)
  {
    std::cout << value << std::endl;
  }
}

int _tmain(int argc, _TCHAR* argv[])
{
  int array1[3] = { 1, 2, 3 };
  PrintArray(array1, 3);
  
  std::array<int, 3> array2 = { 4, 5, 6 };
  PrintArray(array2);
}

This simple use case shows a very important difference between a c-style array and a std::array. The c-style array will be passed as a pointer to its first element. The std::array instead offers the standard copy semantics. Therefore, you can use standard coding practices and for example pass the parameter by const reference. So, the std::array doesn't convert to a pointer unless you explicitly ask it to.

A c-style array is passed as a pointer to the first element. This principle is called “pointer decay”. As the array decays to a pointer, the size information is lost. Therefore, we have to pass an additional function parameter which contains the array size. The previously shown calculation depending on the memory size will no longer work, as the memory size of the passed parameter is now the size of a pointer. Furthermore, as the size information is lost, the range based for loop cannot be used anymore.

You can see that this little design change of the application has nearly no impact if you use a std::array. You can simply refactor your code and extract the according code section as a method. But if you try to do the same for a c-style array, you will get into trouble and, as a result of the pointer decay, you have to change your implementation fundamentally. Within the following article you can see some other impacts of the pointer decay, especially in base and derived class relationships: Arrays and inheritance, a source of errors.

std::array as method parameter with variable size

Maybe you already noticed one limitation of the print function we implemented for the std::array. The function will accept an array with three integer values only. It will not work with arrays of other types or other sizes.

At first you may think this is a limitation or disadvantage of std::array. But it isn’t. On the contrary, it is a big advantage of this data type, as std::array perfectly matches the strict type system of C++. The fixed size is part of the type declaration, so each std::array with a different element type or size is a different type. This allows the compiler to detect and prevent implementation errors.
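
A minimal sketch of how the compiler catches a size mismatch; the two-element array is added only for illustration:

#include <array>
#include <iostream>

void PrintArray(const std::array<int, 3> &values)
{
  for (auto value : values)
  {
    std::cout << value << std::endl;
  }
}

int main()
{
  std::array<int, 3> array3 = { 1, 2, 3 };
  std::array<int, 2> array2 = { 1, 2 };
  (void)array2;          // only needed to avoid an unused variable warning

  PrintArray(array3);    // fine: the types match
  // PrintArray(array2); // compile error: std::array<int, 2> is a different type
}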

Of course, a strict type system makes it more difficult to write generic functions. For these cases C++ offers templates.

The following source code shows a template-based implementation which allows you to write a print function that accepts a std::array of any size. Furthermore, you will see a second function which is even more generic, as it accepts any kind of container object.

template <typename T, std::size_t Size>
void PrintArray(const std::array<T, Size> &values)
{
  for (auto value : values)
  {
    std::cout << value << std::endl;
  }
}

template <typename T>
void PrintContainer(T &values)
{
  for (auto value : values)
  {
    std::cout << value << std::endl;
  }
}

int _tmain(int argc, _TCHAR* argv[])
{
  std::array<int, 3> array2 = { 4, 5, 6 };
  PrintArray(array2);
  PrintContainer<std::array<int, 3>>(array2);
}

c-style array as method parameter with size information

Of course, the template concept can also be used for a c-style array. Instead of passing a pointer to the first element, you can pass the array as a reference to a template function. This allows passing the array together with its specific size.

template <typename T, std::size_t Size>
void PrintArray(T(&values)[Size])
{
  for (auto value : values)
  {
    std::cout << value << std::endl;
  }
}

int _tmain(int argc, _TCHAR* argv[])
{
  int array1[3] = { 1, 2, 3 };
  PrintArray(array1);
}

Array as return value

C-style arrays are among the few types which cannot be assigned to. So, a c-style array cannot be used on the left side of an assignment and it cannot be returned by value either. But you can return a pointer to a dynamically allocated array. Unfortunately, this results in two big issues: you have to know the size of the created array outside of the method and you have to implement the resource management for this raw pointer yourself. Both issues are a good source of errors.

A std::array supports the C++ copy semantics. Therefore, it can be used as a return value. The following example shows a possible implementation for both kinds of arrays.

int* CreateArray()
{
  int* array = new int[3] { 1, 2, 3 };
  return array;
}

std::array<int, 3> CreateStdArray()
{
  std::array<int, 3> array = { 4, 5, 6 };
  return array;
}

int _tmain(int argc, _TCHAR* argv[])
{
  // c-style array
  int* array1 = CreateArray();
  int size = 3;   // you have to know the size of the array

  for (int i = 0; i < size; ++i)
  {
    std::cout << array1[i] << std::endl;
  }

  delete[] array1; // you have to delete the array
   
  // std-style array
  std::array<int, 3> array2 = CreateStdArray();

  for (auto element : array2)
  {
    std::cout << element << std::endl;
  }
}

In both cases you can create factory methods for arrays of any length. In case of a c-style array I would prefer the length as an input parameter, and in case of the std::array I would suggest the template-based approach shown in the previous section about method parameters.
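
A minimal sketch of such a template-based factory for std::array; the zero-initialization is an assumption made only to keep the example short:

#include <array>
#include <cstddef>
#include <iostream>

// the size is a template parameter, so the caller decides it at compile time
template <std::size_t Size>
std::array<int, Size> CreateStdArray()
{
  std::array<int, Size> array = {};   // zero-initialize all elements
  return array;
}

int main()
{
  auto array2 = CreateStdArray<5>();

  for (auto element : array2)
  {
    std::cout << element << std::endl;
  }
}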

If the client does not know the resulting size of the array and the factory method itself should be responsible for creating a collection of the needed size, you should not use a std::array at all. The array is a collection of fixed size but you want a collection of variable size. This sounds like a use case for std::vector. At this moment I don’t want to go further into details, as this doesn’t match the topic of this article. But in a further article I want to continue with a comparison of the c-style array and std::vector.

Range check

One of the base concepts of C and C++ is to provide built-in types without any overhead which could result in performance loss or higher memory usage. Therefore, c-style arrays do not contain a management layer with range checks or similar features. Of course, this missing safety layer can result in implementation errors which cause undefined behavior of your application.

The std::array extends this concept and combines the advantages of both ideas. In favor of performance it comes without any management layer in release builds, while typical standard library implementations (for example the one shipped with Visual C++) add a checking layer in debug builds. If you create a debug build with such an implementation, erroneous element access is detected, which helps to find implementation errors. Independent of the build type, the at() member function always performs a range check and throws a std::out_of_range exception for an invalid index. The following source code shows an according example. If you execute it as a debug build, the erroneous std::array access will be reported.

int _tmain(int argc, _TCHAR* argv[])
{
  // c-style array
  int array1[3] = {1, 2, 3};

  array1[5] = 15;   // undefined behavior, not detected
   
  // std-style array
  std::array<int, 3> array2 = { 4, 5, 6 };

  array2[5] = 25;   // detected in debug builds; array2.at(5) would always throw std::out_of_range
}
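
Independent of the build type, the checked access via at() can be used; a minimal sketch, with the try/catch block added for illustration:

#include <array>
#include <iostream>
#include <stdexcept>

int main()
{
  std::array<int, 3> array2 = { 4, 5, 6 };

  try
  {
    array2.at(5) = 25;   // at() checks the index in release and debug builds
  }
  catch (const std::out_of_range& exception)
  {
    std::cout << "error: " << exception.what() << std::endl;
  }
}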

Performance and memory usage

As mentioned before, the std::array is designed as a zero-overhead wrapper for c-style arrays. Therefore, both types are optimized regarding performance and memory usage. The std::array has a memory layout identical to a c-style array. Furthermore, they have the same performance, except in debug builds. As shown in the previous chapter, typical implementations of std::array add range checks in debug builds and therefore it has a slightly lower performance there. In release builds the performance of std::array and c-style array is the same. By definition, the std::array is a simple aggregate containing a c-style array as its only member.
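
To illustrate the last sentence, the following is a simplified, conceptual sketch of such an aggregate; the name ArraySketch is made up and real implementations add the complete container interface on top:

#include <array>
#include <cstddef>

// conceptual sketch only, not the real std::array implementation
template <typename T, std::size_t Size>
struct ArraySketch
{
  T elements[Size];   // the one and only data member: a plain c-style array
};

// holds on common implementations, matching the zero-overhead design
static_assert(sizeof(ArraySketch<int, 3>) == sizeof(int[3]), "no overhead expected");
static_assert(sizeof(std::array<int, 3>) == sizeof(int[3]), "no overhead expected");

int main()
{
}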

Array and standard algorithms

The std::array provides an STL-consistent interface and can be used with the STL algorithms. The STL algorithms also support c-style arrays. If you have a c-style array with known size, you can directly use it together with the algorithms, for example via std::begin and std::end. As shown in the previous examples, pointer decay results in a loss of the size information and therefore in a loss of the direct algorithm support. But again, you can use the template-based approach to keep the size information of a c-style array method parameter.

The following example shows how to use both array types in STL algorithms. In this case the sort algorithm is used.

int _tmain(int argc, _TCHAR* argv[])
{
  // c-style array
  int array1[3] = {7, 5, 3};
  
  std::sort(std::begin(array1), std::end(array1));

  for (auto element : array1)
  {
    std::cout << element << std::endl;
  }
  
  // std-style array
  std::array<int, 3> array2 = { 7, 5, 3 };

  std::sort(array2.begin(), array2.end());
  
  for (auto element : array2)
  {
    std::cout << element << std::endl;
  }
}

Multi-dimensional arrays

The std::array as well as the c-style array supports multiple dimensions. Multi-dimensional arrays are designed as arrays of nested arrays. The syntax to declare a multi-dimensional std::array directly reflects this nested design. But this comes with the downside of a more awkward syntax compared to a c-style array. The following example shows how to declare, write and read a multi-dimensional array.

int _tmain(int argc, _TCHAR* argv[])
{
  // c-style array
  int array1[2][3] = { { 1, 2, 3 }, { 4, 5, 6} };

  array1[0][1] = 10;
  
  for (auto &row : array1)
  {
    for (auto &element : row)
    {
      std::cout << element << std::endl;
    }
  }     

  // std-style array
  std::array<std::array<int, 3>, 2> array2{ { { 7, 8, 9 }, { 10, 11, 12 } } };

  array2[0][1] = 20;

  for (auto &row : array2)
  {
    for (auto &element : row)
    {
      std::cout << element << std::endl;
    }
  }
}
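
If the nested declaration becomes too verbose, a type alias can hide the nesting; a minimal sketch, with the alias name IntMatrix2x3 chosen for illustration:

#include <array>
#include <iostream>

// the alias hides the nested std::array declaration
using IntMatrix2x3 = std::array<std::array<int, 3>, 2>;

int main()
{
  IntMatrix2x3 matrix{ { { 7, 8, 9 }, { 10, 11, 12 } } };

  for (auto &row : matrix)
  {
    for (auto &element : row)
    {
      std::cout << element << std::endl;
    }
  }
}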

Summary

The std::array is designed as a zero-overhead wrapper for c-style arrays. Therefore, it comes with all the advantages of a c-style array and adds the additional advantages of the C++ containers. I would recommend using std::array unless you have a rare special use case with good reasons to use a c-style array.
Advantages of std::array

  • Perfectly matches the strict type system of C++
  • Copy semantics / no pointer decay / the array doesn’t convert to a pointer unless you explicitly ask it to
  • It knows its size
  • Range checks in debug builds of typical implementations, and checked access via at()
  • No memory overhead beyond what it needs to hold its elements
  • Same performance as a c-style array
  • Direct support for use in STL algorithms

Disadvantages of std::array

  • Slightly more awkward syntax for declaration of the array, especially when creating multi-dimensional arrays

STL Algorithms

The STL algorithms offer a powerful way to solve some base programming use cases. If you know and use these algorithms, you can write cleaner, more efficient and more robust code. But unfortunately, it isn’t that easy to know or find the right algorithm for a use case. In my opinion, this is a result of the impractical documentation – especially the grouping of the algorithms, which is not related to practical use cases – and of the naming of some functions. Therefore, I wrote a series of articles with a practice-related overview of the STL algorithms. Below you will find the overview of this article series with links to the articles.

Algorithms to use, find and check container elements

Algorithms to copy, move and remove container elements

Algorithms to change and sort container elements

Algorithms for sorted containers

Algorithms for two containers

Operations which can be used in STL algorithms

Create own STL algorithms wrapper
