\section{Preliminaries}
\subsection{Code-based reduction proofs}
To prove the security of the EdDSA signature scheme, code-based game-playing proofs are used, as introduced in \cite{EC:BelRog06}. In these proofs an adversary is tasked to play (and win) against a predefined game. The game is defined by a set of instructions that are executed consecutively. At some point the game calls the adversary with some input and receives some output in return. Depending on this output, the game then decides whether the adversary has won. In addition, the adversary may be given oracle access to one or more procedures, meaning that the adversary only observes the output of a procedure call for a specific input; such procedures are called oracles. The advantage of the adversary in a game describes its ability to win the game more reliably than with generic attacks (e.g., guessing the answer to the game).
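As an illustration of these notions, the following Python sketch shows a toy game that is unrelated to EdDSA and not taken from \cite{EC:BelRog06}: the challenger samples a secret value, the adversary gets oracle access to an equality check, and the game decides whether the adversary has won. All function and variable names are purely illustrative.
\begin{verbatim}
import secrets

def guessing_game(adversary, num_queries=10, bits=16):
    """Toy game: the challenger samples a secret and the adversary may
    query an equality-check oracle a bounded number of times."""
    secret = secrets.randbelow(2 ** bits)    # instruction: sample a secret
    queries = 0

    def check_oracle(guess):                 # oracle: only the output of
        nonlocal queries                     # each call is observable
        if queries >= num_queries:
            return False
        queries += 1
        return guess == secret

    output = adversary(check_oracle)         # the game calls the adversary
    return output == secret                  # the game decides win or lose

def generic_adversary(oracle):
    """Generic attack: try a few candidates, otherwise guess blindly."""
    for candidate in range(10):
        if oracle(candidate):
            return candidate
    return secrets.randbelow(2 ** 16)
\end{verbatim}
The advantage of such a generic adversary is small; an adversary is only considered successful if it wins noticeably more often than such generic strategies.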
During the proof these games are modified step by step until an adversary against the modified game can also be used as an adversary against another game. This method is called a reduction proof: it shows that one problem (described by one game) can be reduced to another problem. In other words, if problem A can be reduced to problem B, then any algorithm solving problem A can be transformed into an algorithm solving problem B.
\subsubsection{Identical-Until-Bad Games}
While modifying the games, it has to be ensured that the advantage of an adversary in distinguishing the original game from the modified game is negligible. This can be achieved by constructing so-called identical-until-bad games.
\begin{definition}[identical-until-bad games \cite{EC:BelRog06}]
Two games are called identical-until-bad games if they are syntactically equivalent except for instructions following the setting of a bad flag to true.
\end{definition}
\begin{lemma}[Fundamental lemma of game-playing \cite{EC:BelRog06}]
Let $G$ and $H$ be identical-until-bad games and let $\adversary{A}$ be an adversary. Then,
\[ Adv(G^{\adversary{A}}, H^{\adversary{A}}) = |\prone{G^{\adversary{A}}} - \prone{H^{\adversary{A}}}| \leq \Pr[bad] \]
\end{lemma}
This means that the advantage of distinguishing two identical-until-bad games is bounded by the probability of the bad flag being set.
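As a toy illustration of this lemma (again unrelated to EdDSA; the games only serve to show the syntactic condition), the following Python sketch defines two games that are identical until bad: game $G$ answers every oracle query with a fresh uniform value, while game $H$ additionally resamples after the bad flag has been set, so that its answers never collide.
\begin{verbatim}
import secrets

RANGE = 2 ** 8   # deliberately small so that collisions (and thus bad) occur

def game_G(adversary):
    """Game G: every oracle query is answered with a fresh uniform value."""
    seen, bad = set(), False
    def oracle(_query):
        nonlocal bad
        y = secrets.randbelow(RANGE)
        if y in seen:
            bad = True                 # bad is set, but nothing else happens
        seen.add(y)
        return y
    return adversary(oracle), bad

def game_H(adversary):
    """Game H: identical to G except for the instructions that follow
    the setting of the bad flag, which resample until the value is fresh."""
    seen, bad = set(), False
    def oracle(_query):
        nonlocal bad
        y = secrets.randbelow(RANGE)
        if y in seen:
            bad = True
            while y in seen:           # only this part differs from game G
                y = secrets.randbelow(RANGE)
        seen.add(y)
        return y
    return adversary(oracle), bad
\end{verbatim}
By the fundamental lemma, no adversary can distinguish these two games with advantage larger than $\Pr[bad]$, which here is a birthday-type collision probability in the number of oracle queries.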
\input{sections/notation}
\input{sections/security_notions}
\subsection{Elliptic Curves}
\subsection{Random Oracle Model (ROM)}
\label{sec:rom}
Some of the following proofs are conducted in the random oracle model, which was introduced by Bellare and Rogaway in 1993 \cite{CCS:BelRog93}. In the random oracle model some primitives (in this case hash functions) are modeled as public random oracles. This means that instead of evaluating the hash function itself, the adversary has to query the random oracle provided by the challenger. This random oracle must behave like a truly random function.
To simulate a truly random function in polynomial time, a process called ``lazy sampling'' can be used. Lazy sampling means that the challenger keeps a table that starts out empty. When the adversary queries the random oracle on some input, the challenger checks whether that input is already in the table. If it is, the challenger returns the output value recorded in the table. Otherwise, the challenger chooses an output value uniformly at random, inserts it into the table for that particular input value, and returns it.
This method allows the challenger to observe and influence the behavior of the adversary. Since the random oracle behaves like a truly random function, the adversary must query the random oracle to learn the output value for a given input value. Therefore, the challenger sees every input value the adversary submits to the random oracle. The challenger is also able to program specific output values of the random oracle, as long as they are correctly distributed and consistent. Consistent means that the random oracle must never output two different values for the same input value.
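The following Python sketch captures this behavior; the class and its methods are purely illustrative and not part of any library. The oracle is lazily sampled, the challenger can observe every query through the table, and programming is possible as long as it remains consistent.
\begin{verbatim}
import secrets

class RandomOracle:
    """Lazily sampled random oracle with fixed-length outputs."""

    def __init__(self, out_bits=256):
        self.out_bits = out_bits
        self.table = {}               # the table starts out empty

    def query(self, x):
        # Lazy sampling: return the stored value if x was queried before,
        # otherwise sample a fresh uniform output and remember it.
        if x not in self.table:
            self.table[x] = secrets.randbits(self.out_bits)
        return self.table[x]

    def program(self, x, y):
        # Programming: the challenger fixes the output for x. Consistency
        # requires that x has not already been answered differently.
        if x in self.table and self.table[x] != y:
            raise ValueError("inconsistent programming of the random oracle")
        self.table[x] = y
\end{verbatim}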
%TODO: Can it be phrased like this?
In particular, the programmability of the random oracle will be used in the following proofs and should be kept in mind.
\subsection{Algebraic Group Model (AGM)}
The algebraic group model was introduced in 2018 by Fuchsbauer et al. \cite{C:FucKilLos18}. In the algebraic group model, all adversaries are modeled as algebraic. This means that for every group element the adversary outputs or passes to an oracle, it has to know, and provide to the challenger, a representation in terms of the group elements it has received from the challenger so far. For example, if the adversary receives the group elements $\groupelement{A}$ and $\groupelement{B}$ from the challenger and at some point outputs a group element $\groupelement{C}$, it also has to output a vector $\overset{\rightharpoonup}{c} = (c_1, c_2)$ which satisfies $\groupelement{C} = c_1 \groupelement{A} + c_2 \groupelement{B}$. In the game-based proofs, the group element $\groupelement{C}$ together with its representation $\overset{\rightharpoonup}{c}$ is denoted as $\agmgroupelement{C}{c}$.
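As a small illustration of this bookkeeping, the following Python sketch checks a representation in a toy setting, namely the additive group of integers modulo a prime; the prime and the element values are placeholders, whereas the proofs later use elliptic-curve groups.
\begin{verbatim}
# Toy setting: the additive group of integers modulo a prime P.
# (Illustration only; the actual proofs work with elliptic-curve groups.)
P = 2 ** 255 - 19                      # prime modulus of the toy group

def check_representation(C, rep, received):
    """Check that C = c_1*A_1 + ... + c_n*A_n holds for the representation
    vector rep = (c_1, ..., c_n) and the elements received so far."""
    combo = sum(c * A for c, A in zip(rep, received)) % P
    return combo == C % P

# The adversary received A and B and outputs C together with (c1, c2).
A, B = 17, 42
c1, c2 = 3, 5
C = (c1 * A + c2 * B) % P
assert check_representation(C, (c1, c2), (A, B))
\end{verbatim}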
\subsection{Generic Group Model (GGM)}
Unlike the random oracle model or the algebraic group model, the generic group model is not used to construct reductions from one problem to another. Rather, it is used to obtain an information-theoretic lower bound on the complexity of generic adversaries against a given problem. Generic algorithms are algorithms that only perform the defined group operations on group elements and do not exploit any group-specific representation of the elements.
The generic group model was first introduced by Shoup in 1997 \cite{EC:Shoup97}. In that paper, Shoup proved an information-theoretic lower bound for the discrete logarithm problem. He did so by replacing group elements with labels, i.e., random bit strings, thereby hiding any group-specific representation of the elements. Group operations are only possible via oracles, which the challenger provides to the adversary. The only action the adversary can perform on its own is to compare elements for equality by comparing their labels.
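A minimal Python sketch of this idea for the additive group of integers modulo $n$ (class and method names are illustrative) could look as follows: the challenger hands out random labels and exposes the group operation only through an oracle.
\begin{verbatim}
import secrets

class GenericGroup:
    """Shoup-style generic group for the integers modulo n: the adversary
    only ever sees random labels and a group-operation oracle."""

    def __init__(self, n, label_bytes=16):
        self.n = n
        self.label_bytes = label_bytes
        self.to_label = {}            # exponent -> label
        self.to_exponent = {}         # label -> exponent (challenger only)

    def _label(self, x):
        x %= self.n
        if x not in self.to_label:
            lab = secrets.token_hex(self.label_bytes)
            self.to_label[x] = lab
            self.to_exponent[lab] = x
        return self.to_label[x]

    def encode(self, x):
        """Hand the adversary the label of the element with exponent x."""
        return self._label(x)

    def op(self, label_a, label_b):
        """Group-operation oracle on two labelled elements."""
        return self._label(self.to_exponent[label_a]
                           + self.to_exponent[label_b])
\end{verbatim}
For the discrete logarithm problem, the adversary would receive the labels of the generator and of the secret element and could only combine labels via the operation oracle and compare them for equality.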
In 2005, Maurer proposed an alternative definition of the generic group model \cite{IMA:Maurer05}. The proofs conducted in this thesis use the generic group model as defined by Shoup.