Speech breathing differs markedly from tidal breathing, both in overall amplitude and in the duration of the inspiration and expiration phases. With respect to these two phases, speech breathing is strongly asymmetrical, with the expiration phase easily exceeding 2 to 4 seconds. This extended expiration window is the foundation on which pauses, syllables and stress groups are built. In the first part of this lecture, experimental data from read and spontaneous speech in different speaking styles are presented to show how the duration patterns of pauses, syllables and stress groups change cross-stylistically, and how speakers adjust their breathing patterns to meet these stylistic changes and to support the realization of prominences and phrase boundaries in each style. For example, recent contrastive analyses of breathing patterns in read and narrated speech in Brazilian Portuguese revealed style-specific changes in inspiration amplitude and duration as well as in breath-group duration and, moreover, gender-specific differences within these stylistic changes. Reading is also characterized by a higher inspiration frequency, whereas narration shows a higher inspiration amplitude.

In the second part of this lecture, it is demonstrated that the relatively consistent and style-specific pause and prominence patterns made it possible to develop an algorithm for the automatic detection of terminal boundaries (of utterances in speech-act terms) in read and narrated speech. The algorithm was successfully tested for Brazilian Portuguese (BP), European Portuguese (EP), French and German. It works on the audio file alone, i.e. it requires no prior transcription or other linguistic-analysis effort.
The algorithm operates in two stages: first, it detects vowel onsets (VO); then it normalizes VO-to-VO interval durations to obtain smoothed z-scores, and z-score peaks that exceed a threshold value of 2.5 are treated as terminal boundaries. The proportion of hits is higher for reading (circa 70 %) than for narration (circa 60 %) across all languages, with performance levels generally being higher for EP and French.
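The second stage can be sketched as follows. This is a minimal illustration, not the authors' implementation: vowel-onset detection (stage one) is assumed to have been done already, the moving-average smoothing window and the run-based peak picking are illustrative choices, and only the 2.5 z-score threshold is taken from the text.

```python
import statistics

Z_THRESHOLD = 2.5  # z-score peak threshold for terminal boundaries (from the text)


def terminal_boundaries(vo_times, smooth_window=3):
    """Return indices of VO-to-VO intervals treated as terminal boundaries.

    vo_times      -- vowel-onset timestamps in seconds (stage one assumed done)
    smooth_window -- moving-average width; an illustrative assumption, the
                     actual smoothing used by the algorithm is not specified here
    """
    # VO-to-VO interval durations
    intervals = [b - a for a, b in zip(vo_times, vo_times[1:])]

    # Normalize durations to z-scores
    mu = statistics.mean(intervals)
    sigma = statistics.stdev(intervals)
    if sigma == 0:  # a perfectly flat interval sequence has no boundary candidates
        return []
    z = [(d - mu) / sigma for d in intervals]

    # Simple moving-average smoothing of the z-score contour
    half = smooth_window // 2
    smoothed = [statistics.mean(z[max(0, i - half):i + half + 1])
                for i in range(len(z))]

    # Each contiguous run above the threshold yields one boundary,
    # placed at the run's highest smoothed z-score
    boundaries, i = [], 0
    while i < len(smoothed):
        if smoothed[i] > Z_THRESHOLD:
            j = i
            while j < len(smoothed) and smoothed[j] > Z_THRESHOLD:
                j += 1
            boundaries.append(max(range(i, j), key=smoothed.__getitem__))
            i = j
        else:
            i += 1
    return boundaries
```

The intuition is that a terminal boundary stretches the local VO-to-VO interval (pre-boundary lengthening plus any pause), so that interval stands out as a high z-score peak against the running mean, which is exactly what the threshold picks up.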