In Java 8, streams use a technique called “lazy evaluation” to process data. Lazy evaluation means that the operations in a stream pipeline are not executed until their results are actually needed. This can lead to significant performance gains when working with large data sets.

When you create a stream, it does not immediately process the data. Instead, it creates a “pipeline” of operations that will be performed on the data. The pipeline is not executed until a terminal operation is called on the stream, such as forEach, toArray, count, or collect.

For example, consider the following code:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
Stream<Integer> numberStream = numbers.stream();
numberStream = numberStream.filter(n -> n > 2); // intermediate operation: nothing runs yet
numberStream = numberStream.map(n -> n * 2);    // intermediate operation: nothing runs yet
numberStream.forEach(System.out::println);      // terminal operation: the pipeline executes here

When the stream is created, it does not immediately filter or map the elements. Instead, it creates a pipeline of operations that will be performed when the terminal operation (forEach) is called.

In this case, filter() and map() are both intermediate operations: they return a new stream, but they don’t perform any computation on the data. They merely describe the computation to be done.

Only when the terminal operation is called does the stream evaluate the pipeline, pulling each element through all of the intermediate operations in turn.
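To make the laziness visible, here is a small sketch. The println calls inside the lambdas are only there to trace execution, not something you would write in real code:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
Stream<Integer> pipeline = numbers.stream()
    .filter(n -> { System.out.println("filter: " + n); return n > 2; })
    .map(n -> { System.out.println("map: " + n); return n * 2; });
// Nothing has been printed yet: the pipeline is only a description.
pipeline.forEach(n -> System.out.println("forEach: " + n));

Running this prints, for example, “filter: 3”, “map: 3”, “forEach: 6” before “filter: 4” ever appears, showing that each element flows through the whole pipeline individually.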

Advantages:

The advantage of this approach is that a stream can stop processing as soon as it has the result it needs. For example, if you chain filter() with findFirst(), the stream stops processing as soon as the first element matching the filter’s predicate is found (this is known as short-circuiting), rather than filtering all the elements first.
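A minimal sketch of this short-circuiting behavior, again using a purely illustrative tracing println:

import java.util.Optional;
import java.util.stream.Stream;

Optional<Integer> first = Stream.of(1, 2, 3, 4, 5)
    .filter(n -> { System.out.println("checking: " + n); return n > 2; })
    .findFirst();
System.out.println(first.get()); // prints 3; "checking" appears only for 1, 2, and 3

Because findFirst() only needs one element, the elements 4 and 5 are never examined at all.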

Another advantage is that a stream can process data in parallel. If you obtain the stream with parallelStream() (or call parallel() on an existing stream), the data is automatically partitioned into smaller chunks that are processed concurrently by multiple threads, by default using the common ForkJoinPool.
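For illustration, a sketch of the parallel form; the result is the same as the sequential version, only the execution strategy changes:

import java.util.Arrays;
import java.util.List;

List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
int sum = numbers.parallelStream()
    .mapToInt(n -> n * 2) // the same lazy pipeline, now evaluated across threads
    .sum();               // terminal operation: triggers the parallel evaluation
System.out.println(sum);  // 30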

Lazy evaluation is one of the key features that makes Java 8 streams efficient and powerful when working with large data sets.