Higher-Order Differential Equations
Reduction of Order (homogeneous, linear ODEs with constant or variable coefficients)
To be honest, this method is more useful in deriving other methods than it is for solving ODEs. The reason is that you must already have a solution to your ODE in order to apply Reduction of Order. The upside is that it is one of the few methods that can be used to help solve higher-order ODEs with variable coefficients.
Suppose you are given a homogeneous ODE that can be rewritten in the form\[y'' + b_1(x) y' + b_0(x) y = 0,\] where \(b_0(x)\) and \(b_1(x)\) are continuous. Moreover, suppose we have the resources to find (or we have been given) one solution of this ODE, \(y_1(x)\). Then we may assume that the second solution, which must be linearly independent of the first, can be written in the form\[y_2(x) = u(x) y_1(x).\]I ask that you review our lecture notes to verify that \(u(x)\) can be found and that\[u(x) = \int{\frac{e^{-\int{b_1(x)\,dx}}}{y_1^2(x)}\,dx}.\] Hence, \[y_2(x) = y_1(x) \int{\frac{e^{-\int{b_1(x)\,dx}}}{y_1^2(x)}\,dx}.\]
WARNING! It is incredibly important that the given ODE be placed in standard form prior to doing Reduction of Order.
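The formula above can be checked on a concrete equation. Below is a minimal sketch using SymPy on a hypothetical example (not from the notes): \(x^2 y'' - 3x y' + 4y = 0\) with the known solution \(y_1 = x^2\). In standard form this is \(y'' - \frac{3}{x} y' + \frac{4}{x^2} y = 0\), so \(b_1(x) = -3/x\).

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Hypothetical example: x^2 y'' - 3x y' + 4y = 0 with known solution y1 = x^2.
# Standard form: y'' - (3/x) y' + (4/x^2) y = 0, so b1(x) = -3/x.
y1 = x**2
b1 = -3/x

# Reduction of Order: u(x) = ∫ e^(-∫ b1 dx) / y1^2 dx, then y2 = u * y1.
u = sp.integrate(sp.exp(-sp.integrate(b1, x)) / y1**2, x)
y2 = sp.simplify(u * y1)
print(y2)  # x**2*log(x)

# Verify y2 satisfies the original (non-standard-form) ODE.
residual = sp.simplify(x**2*sp.diff(y2, x, 2) - 3*x*sp.diff(y2, x) + 4*y2)
print(residual)  # 0
```

Note that the integral of \(b_1\) here produces \(e^{3\ln x} = x^3\), which is why placing the ODE in standard form first (per the warning above) is essential: using the raw coefficient \(-3x\) instead of \(-3/x\) gives a wrong answer.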
Characteristic Equations (linear, homogeneous ODEs with constant coefficients)
This is the best method to use with higher-order, linear, homogeneous ODEs that have constant coefficients. It is simple and elegant.
Suppose you are asked to solve an ODE that can be written in the form
\(y^{(n)} + b_{n-1}y^{(n-1)} + b_{n-2}y^{(n-2)} + \cdots + b_1 y' + b_0 y = 0,\)

(1)

where \(b_i\) is constant for \(i = 0, 1, 2, \ldots, n - 1\).
The associated characteristic equation is
\(m^n + b_{n-1}m^{n-1} + b_{n-2}m^{n-2} + \cdots + b_1 m + b_0 = 0.\)

(2)

Let \(m_1, m_2, \ldots, m_n\) be the \(n\) solutions to (2).
- If \(m_i\) is a simple (non-repeated) root, then \(y_i = c_i e^{m_i x}\) is a solution of (1).
- If \(m_i\) is a root of (2) of multiplicity \(k\), then \(y_i = c_{i,0} e^{m_i x} + c_{i,1} x e^{m_i x} + c_{i,2} x^2 e^{m_i x} + \cdots + c_{i,k-1} x^{k-1} e^{m_i x}\) is a solution of (1).
- If \(m_i = \alpha + i \beta\) is a complex-valued root of (2) (its conjugate \(\alpha - i \beta\) is then also a root), then \(y_i = e^{\alpha x} \left[c_{i,0} \cos{(\beta x)} + c_{i,1} \sin{(\beta x)}\right]\) is a solution of (1).
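The three cases above can be checked with SymPy. Here is a sketch on a hypothetical example (not from the notes), \(y'' - 2y' + 5y = 0\), whose characteristic equation \(m^2 - 2m + 5 = 0\) has the complex pair \(m = 1 \pm 2i\), so the third case predicts \(y = e^{x}\left[c_1 \cos(2x) + c_2 \sin(2x)\right]\).

```python
import sympy as sp

x, m = sp.symbols('x m')
y = sp.Function('y')

# Hypothetical example: y'' - 2y' + 5y = 0.
# Characteristic equation m^2 - 2m + 5 = 0 should give m = 1 ± 2i.
roots = sp.solve(m**2 - 2*m + 5, m)
print(roots)

# SymPy's general-purpose ODE solver should agree with the complex-root case:
# e^x times a combination of cos(2x) and sin(2x).
ode = sp.Eq(y(x).diff(x, 2) - 2*y(x).diff(x) + 5*y(x), 0)
sol = sp.dsolve(ode, y(x))
print(sol)
```

Substituting the returned right-hand side back into the ODE (for arbitrary constants) reduces to 0, which is a quick way to check any of the three root cases.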
Annihilators (linear, nonhomogeneous ODEs with constant coefficients)
This method and Variation of Parameters go hand-in-hand. It is sometimes a choice of personal preference between the two methods; however, using the Annihilator method is dependent on the driving function. As long as \(g(x)\) has the form \(x^k\), \(\cos{(\beta x)}\), \(\sin{(\beta x)}\), \(e^{\alpha x}\), or a polynomial or exponential multiple of any of these functions, then the Annihilator method should be considered as an option.
To keep this review somewhat short, I recommend you look at the theory of differential operators to remind yourself of why they work. Anyhow, given the nonhomogeneous ODE
\(a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = g(x),\)

(3)

where \(a_i\) are constant for \(i = 0, 1, \ldots, n\), we first solve the associated homogeneous ODE of (3). Once this is done, we move on to determining the annihilator for \(g(x)\) and apply this to both sides of (3). The concept is easier to understand via examples, so I suggest looking through your notes for how this works.
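As one such example (hypothetical, not from the notes), consider \(y'' - y = e^{2x}\). The annihilator of \(e^{2x}\) is \((D - 2)\); applying it to both sides yields the homogeneous equation \((D - 2)(D^2 - 1)y = 0\) with characteristic roots \(2, 1, -1\). Since \(m = 2\) is not a root of the original characteristic equation \(m^2 - 1 = 0\), the new root contributes the trial particular solution \(y_p = A e^{2x}\). A sketch of the final coefficient-matching step in SymPy:

```python
import sympy as sp

x, A = sp.symbols('x A')

# Hypothetical example: y'' - y = e^(2x), with trial solution y_p = A e^(2x)
# suggested by the annihilator (D - 2).
yp = A * sp.exp(2*x)

# Substitute y_p into y'' - y - e^(2x); the residual must vanish identically.
residual = sp.diff(yp, x, 2) - yp - sp.exp(2*x)
coeff = sp.solve(sp.Eq(residual, 0), A)
print(coeff)  # [1/3]
```

So \(y_p = \frac{1}{3} e^{2x}\), and the general solution is \(y = c_1 e^{x} + c_2 e^{-x} + \frac{1}{3} e^{2x}\).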
Variation of Parameters (linear, nonhomogeneous ODEs with constant coefficients)
This method is the "catch-all" that can be used with any higher-order ODE with constant coefficients; however, it can be tedious, so you've been warned.
The basis of this method is derived from Reduction of Order. I recommend that you read the text or review your notes for the full derivation. I will only talk about the use of the method here. The punchline is that, if we can find the complementary solution to the homogeneous ODE, \(y_c = c_1 y_1 + c_2 y_2 + \cdots + c_n y_n\), we create the particular solution to the original ODE by finding functions \(u_k(x)\) such that \(y_p = u_1(x) y_1 + u_2(x) y_2 + \cdots + u_n(x) y_n\).
Given the nonhomogeneous ODE (3) with constant coefficients, we first rewrite in the standard form
\(y^{(n)} + b_{n-1}y^{(n-1)} + b_{n-2}y^{(n-2)} + \cdots + b_1 y' + b_0 y = \frac{1}{a_n} g(x).\)

(4)

WARNING! Rewriting (3) in standard form is necessary to properly perform Variation of Parameters. Be certain to do this.
Solve the associated homogeneous ODE of (4) to obtain the complementary function \(y_c = c_1 y_1 + c_2 y_2 + \cdots + c_n y_n\). Once found, calculate the Wronskian, \(W(y_1, y_2, \ldots, y_n)\), and the minor Wronskians, \(W_1(\vec{f},y_2,y_3,\ldots,y_n)\), \(W_2(y_1,\vec{f},y_3,\ldots,y_n)\), \(\ldots\), \(W_n(y_1,y_2,\ldots,y_{n-1},\vec{f})\), where \(\vec{f}\) is a column vector with 0 in each entry except the final one, which is \(\frac{1}{a_n} g(x)\).
Using Cramer's Rule, we find that
\(u_k'(x) = \frac{W_k}{W}.\)

(5)

Integrating each of the \(u_k'(x)\) in (5), we arrive at our mysterious coefficient functions \(u_k(x)\). We can now state the particular and general solutions\[y_p = u_1(x)y_1 + u_2(x)y_2 + \cdots + u_n(x)y_n,\]\[y = y_c + y_p.\]
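The whole procedure can be sketched in SymPy on a classic hypothetical example (not from the notes): \(y'' + y = \sec(x)\), where the complementary solutions are \(y_1 = \cos(x)\), \(y_2 = \sin(x)\) and the equation is already in standard form (\(a_n = 1\)).

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical example: y'' + y = sec(x), with y1 = cos(x), y2 = sin(x)
# and driving term f(x) = sec(x) (already in standard form).
y1, y2, f = sp.cos(x), sp.sin(x), sp.sec(x)

W  = sp.Matrix([[y1, y2], [y1.diff(x), y2.diff(x)]]).det()   # Wronskian
W1 = sp.Matrix([[0,  y2], [f,  y2.diff(x)]]).det()           # column 1 replaced by (0, f)
W2 = sp.Matrix([[y1, 0 ], [y1.diff(x), f ]]).det()           # column 2 replaced by (0, f)

u1 = sp.integrate(sp.simplify(W1 / W), x)   # u1' = W1/W by Cramer's Rule
u2 = sp.integrate(sp.simplify(W2 / W), x)   # u2' = W2/W

yp = sp.simplify(u1*y1 + u2*y2)
print(yp)
```

Here \(W = 1\), \(u_1'(x) = -\tan x\), and \(u_2'(x) = 1\), giving \(y_p = \cos(x)\ln|\cos x| + x\sin(x)\), which can be verified by substitution into the ODE.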
CauchyEuler Equations (not to appear on the final exam)
Solutions via Power Series (linear, homogeneous ODEs with constant or variable coefficients)
This is a method of last resort, but you might be forced to solve an ODE via a power series about an ordinary point.
Given the linear ODE
\(a_n(x) y^{(n)} + a_{n-1}(x)y^{(n-1)} + \cdots + a_1(x)y' + a_0(x) y = 0,\)

(6)

we suppose that we can get a solution of the form
\(y = \sum_{n = 0}^{\infty}{c_n (x - a)^n}.\)

(7)

Insert (7) and its derivatives into (6), shift indices so that every sum involves the same power of \(x\), and condense the result into a few leftover terms plus a single infinite summation. Equating each resulting coefficient of \(x^k\) to 0 (for \(k \in \mathbb{N} \cup \{0\}\)) builds a recurrence relation among the coefficients of (7). Each degree of freedom (i.e., each free parameter \(c_k\)) generates its own family of coefficients.
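The recurrence step can be illustrated on a hypothetical example (not from the notes): \(y'' + y = 0\) about the ordinary point \(a = 0\). Substituting (7) and reindexing yields the recurrence \(c_{k+2} = -c_k / \big((k+1)(k+2)\big)\) with \(c_0\) and \(c_1\) free, which the sketch below iterates:

```python
import sympy as sp

x, c0, c1 = sp.symbols('x c0 c1')

# Hypothetical example: y'' + y = 0 about a = 0.  Equating coefficients of x^k
# to 0 yields the recurrence c_{k+2} = -c_k / ((k+1)(k+2)), with c0, c1 free.
c = {0: c0, 1: c1}
for k in range(8):
    c[k + 2] = -c[k] / ((k + 1)*(k + 2))

# Assemble the first ten terms of the series solution and group by free parameter.
y = sp.expand(sum(c[n]*x**n for n in range(10)))
print(sp.collect(y, (c0, c1)))
```

As expected for \(y'' + y = 0\), the \(c_0\) family reproduces the Maclaurin series of \(\cos x\) and the \(c_1\) family reproduces that of \(\sin x\), one series per degree of freedom.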
Laplace Transforms (linear, nonhomogeneous [or homogeneous] ODEs with constant or variable coefficients, where the driving function can be piecewise-defined)
Laplace transforms are normally treated as a single major topic in and of themselves. This is because of the amount of background theory necessary to operate with them correctly. What could be expected of you other than solving differential equations using Laplace transforms on the final exam? Well, you should know the properties of the Laplace transform from the Laplace transform page.
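As a reminder of the basic workflow, here is a sketch on a hypothetical example (not from the notes): \(y'' + y = 0\) with \(y(0) = 0\), \(y'(0) = 1\). Using \(\mathcal{L}\{y''\} = s^2 Y - s\,y(0) - y'(0)\), the ODE transforms to the algebraic equation \((s^2 Y - 1) + Y = 0\), and inverting \(Y(s)\) recovers \(y(t)\):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')

# Hypothetical example: y'' + y = 0, y(0) = 0, y'(0) = 1.
# Transform term by term: (s^2 Y - s*y(0) - y'(0)) + Y = 0.
algebraic = sp.Eq(s**2*Y - s*0 - 1 + Y, 0)

# Solve the algebraic equation for Y(s), then invert the transform.
Ys = sp.solve(algebraic, Y)[0]
print(Ys)   # 1/(s**2 + 1)
y = sp.inverse_laplace_transform(Ys, s, t)
print(y)
```

The inverse transform gives \(y(t) = \sin t\), matching what the characteristic-equation method would produce for the same initial-value problem.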