# Reversal of a regular language, revisited

One of the most classical problems about regular languages is reversal: given a regular language, show that its reverse is also regular. Most solutions are based on finite state machines, that is, constructing a new FSM by reversing all the arrows, adding epsilon moves from a fresh start state to the original accept states, and converting the resulting NFA to a DFA.

However, it gets messy once you try to write down all the details, such as defining “adding epsilon moves to the accept states”. Say,

$\delta' (q_0, \epsilon) = F$

where $F$ is the set of accept states of the original machine and $q_0$ is the fresh start state. We also need to reverse the transitions, as in

$\delta'(q, x) = \{ p : \delta(p, x) = q\}.$

Since the transitions are labelled by symbols of the alphabet, $\delta'$ cannot be written directly as an inverse image. The construction goes on to complete every detail in this way. After completing it, it is time to prove that it works. So we may need to define strings as finite sequences, and give a mathematical definition of “reverse”, such as: $s^{r}$ is the reverse of $s$ if $s^{r}_{i} = s_{n-i+1}$ for $1 \le i \le n$, where $n$ is the length of the string.
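For concreteness, the whole automaton construction fits in a few lines of Python. This is only a sketch under an assumed NFA representation (not from any particular library): transitions are a dict from `(state, symbol)` pairs to sets of states, and epsilon is modelled by the symbol `None`.

```python
def reverse_nfa(states, delta, start, accept):
    """Reverse an NFA: flip every arrow, make the old start state the only
    accept state, and add a fresh start state with epsilon moves (symbol
    None) into the old accept states."""
    rdelta = {}
    for (p, x), qs in delta.items():
        for q in qs:
            rdelta.setdefault((q, x), set()).add(p)
    new_start = max(states) + 1          # a fresh state not in `states`
    rdelta[(new_start, None)] = set(accept)
    return states | {new_start}, rdelta, new_start, {start}

def eps_closure(delta, qs):
    """All states reachable from `qs` via epsilon (None) moves."""
    stack, seen = list(qs), set(qs)
    while stack:
        q = stack.pop()
        for r in delta.get((q, None), ()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def accepts(delta, start, accept, s):
    """Standard NFA simulation with epsilon closure."""
    current = eps_closure(delta, {start})
    for x in s:
        nxt = set()
        for q in current:
            nxt |= delta.get((q, x), set())
        current = eps_closure(delta, nxt)
    return bool(current & set(accept))
```

For example, reversing a two-transition NFA that accepts exactly `"ab"` yields a machine that accepts exactly `"ba"`. Even in this toy form, the bookkeeping (fresh states, epsilon closure) hints at why the pen-and-paper version is tedious.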

Then we have to prove that if a string $s$ is accepted by the original machine, then $s^{r}$ is accepted by the new one, and that the new machine accepts nothing other than the reverses of strings accepted by the old one. To be honest, it is tedious to solve a problem by such an ad hoc method.

However, since FSMs and regular expressions are equivalent, why not do it over regular expressions instead? It takes only a few lines to define reversal using valid regular operations, as follows.

$\mathcal{R} : RE \rightarrow RE$
$\mathcal{R}(\epsilon) = \epsilon$
$\mathcal{R}(a) = a$
$\mathcal{R}(pq) = \mathcal{R}(q) \mathcal{R}(p)$
$\mathcal{R}(p^{*}) = \mathcal{R}(p)^{*}$
$\mathcal{R}(p | q) = \mathcal{R}(p) | \mathcal{R}(q)$

for any $p, q \in RE$ and $a \in \Sigma$.
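The definition of $\mathcal{R}$ transcribes almost verbatim into code. Here is a sketch using a hypothetical tuple encoding of the syntax: `('eps',)` for $\epsilon$, `('sym', a)` for a symbol, and `('cat', p, q)`, `('star', p)`, `('alt', p, q)` for the three operations.

```python
def rev(re):
    """Structural reversal of a regular expression, one case per clause."""
    tag = re[0]
    if tag in ('eps', 'sym'):
        return re                                # base cases: unchanged
    if tag == 'cat':
        _, p, q = re
        return ('cat', rev(q), rev(p))           # reversal swaps the factors
    if tag == 'star':
        return ('star', rev(re[1]))
    if tag == 'alt':
        return ('alt', rev(re[1]), rev(re[2]))
    raise ValueError(f"unknown constructor: {tag}")
```

A pleasant consequence of the symmetric definition is that `rev` is an involution: applying it twice gives back the original expression.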

The only non-trivial clause to prove correct is the third one (concatenation), but the proof by structural induction (or via the initial algebra) is very intuitive. Moreover, since the definition is structural, we can play the same trick on other constructions, such as the intersection of regular languages(?) (Sorry, I can’t make that one work.) The point is that we not only solve one problem but also obtain an idiom for solving a whole family of problems!
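For the concatenation clause, the induction step boils down to the string identity $(uv)^{r} = v^{r}u^{r}$ lifted to languages. Writing $L(e)$ for the language of an expression $e$, and assuming the induction hypotheses $L(\mathcal{R}(p)) = L(p)^{r}$ and $L(\mathcal{R}(q)) = L(q)^{r}$, the chain of equalities is:

$L(\mathcal{R}(pq)) = L(\mathcal{R}(q)\,\mathcal{R}(p)) = L(\mathcal{R}(q))\,L(\mathcal{R}(p)) = L(q)^{r}\,L(p)^{r} = (L(p)\,L(q))^{r} = L(pq)^{r}.$

The other clauses are even easier, since $\mathcal{R}$ leaves their top-level structure unchanged.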

By the way, when I was an undergraduate, my teaching assistant gave me a zero for this answer, because he said he had never seen anyone do it this way before and he didn’t understand why the structural induction works. Ridiculous!

p.s. I noticed that this approach is used in *Introduction to Automata Theory, Languages, and Computation* by John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman.