Preprint · Article · This version is not peer-reviewed.

Differential Quasilinearization Method for Solving Initial Value Problems of Ordinary Differential Equations

Jia Qiu *

Submitted: 26 January 2025 · Posted: 28 January 2025

Abstract
The differential quasilinearization method (DQM) is presented as an effective technique for obtaining approximate analytical solutions to ordinary differential equations (ODEs). This method converts general ODEs into a sequence of linear ODEs, facilitating efficient computation of successive approximations. As demonstrated by examples in the text, DQM is especially well-suited for addressing implicit or nonlinear ODEs.

1. Introduction

It is often challenging to obtain analytical solutions to ordinary differential equations (ODEs), which has led to the development of various methods for finding approximate solutions [1–10]. These approximation methods fall into two main types: numerical methods and analytical methods. Analytical methods, in particular, are widely used in solving many scientific and engineering problems due to their straightforward formulation, good continuity, and clearer physical interpretation.
Among the analytical methods, successive approximation techniques are especially valued for their simplicity, broad applicability, and controllable accuracy. As a result, they have garnered increasing attention from researchers across various fields. Notable successive approximation methods include the Picard iteration method [5,11,12], the Adomian decomposition method (ADM) [6,7,13–16], the homotopy analysis method (HAM) [8,17–21], the variational iteration method (VIM) [10,22–25], and the quasilinearization method [12]. However, these methods typically require that the equation(s) be explicit or have a linear differential operator, that is, that the equation(s) can be written in the following form:
$$L y(x) = N[x, y(x)]$$
where L is a linear differential operator and N is either a linear or nonlinear operator. For example, consider the equation:
$$\frac{dy}{dx} = f(x, y)$$
In this case, the linear differential operator L can be expressed as:
$$L = \frac{d}{dx}$$
The operator N can be expressed as:
$$N = f(\,\cdot\,, \,\cdot\,)$$
Similarly, for the equation:
$$y''' + 3y'' + 2y' = g(x, y, y')$$
The linear differential operator L can be expressed as:
$$L = \frac{d^3}{dx^3} + 3\frac{d^2}{dx^2} + 2\frac{d}{dx}$$
The operator N can be expressed as:
$$N = g(\,\cdot\,, \,\cdot\,, \,\cdot\,)$$
In this work, the differential quasilinearization method (DQM), which is free from the previously mentioned limitations, is presented for solving initial value problems of ordinary differential equations (ODEs), especially implicit or nonlinear ones. The validity of the method is verified through two examples.

2. The basic principle of the differential quasilinearization method

2.1. Quasilinearization of ordinary differential equations

To illustrate the basic concept of the DQM, we consider the following general system of ODEs:
$$F(Y, x) = 0 \tag{2.1}$$
subject to the initial condition:
$$G(Y(x_0)) = 0 \tag{2.2}$$
where:
$$Y = \begin{pmatrix} y^{(n)} \\ y^{(n-1)} \\ \vdots \\ y' \\ y \end{pmatrix} \tag{2.3}$$
$$y = \begin{pmatrix} y_m \\ y_{m-1} \\ \vdots \\ y_2 \\ y_1 \end{pmatrix} \tag{2.4}$$
$F(Y, x)$ and $G(Y(x_0))$ are considered as column vectors. Differentiating both sides of Equation (2.1), we obtain:
$$\frac{\partial F(Y, x)}{\partial Y} Y'(x) + \frac{\partial F(Y, x)}{\partial x} = 0 \tag{2.5}$$
If $\frac{\partial F(Y,x)}{\partial Y}$ and $\frac{\partial F(Y,x)}{\partial x}$ are regarded as known functions of $x$, Equation (2.5) can be treated as a system of linear ODEs. Consequently, the initial value problem described by Equations (2.1) and (2.2) is converted into an initial value problem for ODEs with a linear differential operator, as follows:
$$\begin{cases} \dfrac{\partial F(Y, x)}{\partial Y} Y'(x) + \dfrac{\partial F(Y, x)}{\partial x} = 0 \\ F(Y(x_0), x_0) = 0 \\ G(Y(x_0)) = 0 \end{cases} \tag{2.6}$$
The existence and uniqueness of the solution to the initial value problem (2.6) can be established by the contraction mapping principle or the Picard–Lindelöf theorem [11]. Successive approximations of the solution can then be obtained using the Picard iteration method or other established methods applicable to ODEs with a linear part [11].

2.2. Quasilinearization recursion equations of ODEs

To solve the initial value problem described by Equation (2.6), we aim to construct a sequence of vector-valued functions $\varphi_i(x)$ that converges to the solution of this initial value problem, such that:
$$\lim_{i \to \infty} \varphi_i(x) = \varphi(x) \tag{2.7}$$
$$\frac{\partial F(\Phi, x)}{\partial \Phi} \Phi'(x) + \frac{\partial F(\Phi, x)}{\partial x} = 0 \tag{2.8}$$
where:
$$\Phi = \begin{pmatrix} \varphi^{(n)} \\ \varphi^{(n-1)} \\ \vdots \\ \varphi' \\ \varphi \end{pmatrix} \tag{2.9}$$
$$\varphi = \begin{pmatrix} \varphi_m \\ \varphi_{m-1} \\ \vdots \\ \varphi_2 \\ \varphi_1 \end{pmatrix} \tag{2.10}$$
$$\varphi_i = \begin{pmatrix} \varphi_{i,m} \\ \varphi_{i,m-1} \\ \vdots \\ \varphi_{i,2} \\ \varphi_{i,1} \end{pmatrix} \tag{2.11}$$
By substituting the components of $\varphi$ in Equation (2.8) with the components in the corresponding dimension of element(s) of the sequence $\varphi_i(x)$, a recurrence equation based on Equation (2.8) can be obtained:
$$A(\Phi_{k-1}, \ldots, \Phi_i, \ldots, \Phi_0, x)\, \Phi_k = G(\Phi_{k-1}, \ldots, \Phi_i, \ldots, \Phi_0, x) \tag{2.12}$$
where:
$$\Phi_i = \begin{pmatrix} \varphi_i^{(n)} \\ \varphi_i^{(n-1)} \\ \vdots \\ \varphi_i' \\ \varphi_i \end{pmatrix} \tag{2.13}$$
Each term of the sequence $\varphi_i(x)$ can typically be deduced successively. It is important to note that in Equation (2.12), the largest subscript $i$ of $\Phi_i$ appearing in $A(\Phi_{k-1}, \ldots, \Phi_0, x)$ and $G(\Phi_{k-1}, \ldots, \Phi_0, x)$ must be smaller than $k$. In addition, the elements of $\varphi_i(x)$ must satisfy the initial conditions:
$$F(\Phi_i(x_0), x_0) = 0, \quad G(\Phi_i(x_0)) = 0 \tag{2.14}$$
and an initial guess for $\varphi_0$ is required. By substituting the known elements of $\varphi_i(x)$ with smaller subscripts ($k-1$, $k-2$, ..., $0$) into the recursive Equation (2.12), that equation can be viewed as a linear equation with respect to $\Phi_k$. Thus, $\varphi_k$ can typically be obtained by solving a system of linear differential equations. Repeating this process yields any term of the sequence $\varphi_i(x)$. If the initial value problem described by Equation (2.6) has a unique solution, $\varphi_i$ can be regarded as an approximate solution to the initial value problem.
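The recursion above can be sketched on a concrete toy problem (our own choice, not one of the paper's examples). For $y' = y^2$, $y(0) = 1$, whose exact solution is $1/(1-x)$, differentiating gives $y'' = 2yy'$; freezing the coefficient at the previous iterate yields the linear equation $y_k'' = 2y_{k-1}(x)\,y_k'$ with $y_k(0) = 1$ and $y_k'(0) = y(0)^2 = 1$, which is solvable by elementary quadrature: $y_k'(x) = \exp(\int_0^x 2y_{k-1})$ and $y_k(x) = 1 + \int_0^x y_k'$. A minimal numerical sketch:

```python
import math

# Toy DQM recursion for y' = y**2, y(0) = 1, exact solution 1/(1 - x).
# Iterates are sampled on a uniform grid over [0, X].
N, X = 400, 0.1
h = X / N

def cumtrapz(vals):
    """Cumulative trapezoid integral of vals sampled on the uniform grid."""
    out = [0.0]
    for i in range(1, len(vals)):
        out.append(out[-1] + h * (vals[i - 1] + vals[i]) / 2)
    return out

y = [1.0] * (N + 1)                       # initial guess y_0(x) = 1
for _ in range(3):
    I = cumtrapz([2 * v for v in y])      # integral of 2*y_{k-1}
    yp = [math.exp(s) for s in I]         # y_k' from the linearized equation
    y = [1.0 + s for s in cumtrapz(yp)]   # y_k

exact = 1 / (1 - X)
print(abs(y[-1] - exact))                 # small residual after 3 iterations
```

Each pass solves only a linear problem in $y_k$; three passes already reproduce $1/(1-x)$ near $x_0$ to high accuracy.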

3. A simplified version of the differential quasilinearization method: converting to lower-order quasilinear ODEs and recursive equations

3.1. General system of ODEs

Even linear ODEs can be challenging to solve when the order is relatively high, although they are generally simpler than nonlinear ODEs. To simplify the problem, Equation (2.5) can be transformed into a system of lower-order ODEs, ideally first-order ODEs.
Let:
$$y' = y_1, \quad y_1' = y_2, \quad \ldots, \quad y_{n-1}' = y_n \tag{3.1}$$
then:
$$Y = \begin{pmatrix} y_n \\ y_{n-1} \\ \vdots \\ y_1 \\ y \end{pmatrix} \tag{3.2}$$
$$Y' = \begin{pmatrix} y_n' \\ y_n \\ \vdots \\ y_2 \\ y_1 \end{pmatrix} \tag{3.3}$$
Thus, Equation (2.5) can be converted into the following first-order ODEs:
$$\begin{cases} y' = y_1 \\ y_1' = y_2 \\ \vdots \\ y_{n-1}' = y_n \\ \dfrac{\partial F(Y, x)}{\partial Y} Y'(x) + \dfrac{\partial F(Y, x)}{\partial x} = 0 \end{cases} \tag{3.4}$$
and the initial value problem expressed in Equation (2.6) can be transformed into:
$$\begin{cases} y' = y_1 \\ y_1' = y_2 \\ \vdots \\ y_{n-1}' = y_n \\ \dfrac{\partial F(Y, x)}{\partial Y} Y'(x) + \dfrac{\partial F(Y, x)}{\partial x} = 0 \\ F(Y(x_0), x_0) = 0 \\ G(Y(x_0)) = 0 \end{cases} \tag{3.5}$$
where the expressions for $Y$ and $Y'$ are given in Equations (3.2) and (3.3), respectively. Next, proceed with the method outlined in the last section for further processing. To provide a more intuitive introduction to the simplified version of DQM, a first-order ODE and a system of first-order ODEs will be used as examples for a more detailed discussion in the following subsections.

3.2. First order ODE

For example, consider the ODE:
$$F(y', y, x) = 0 \tag{3.6}$$
subject to the initial condition:
$$y(x_0) = y_0 \tag{3.7}$$
where $F$ has continuous partial derivatives with respect to $y'$, $y$, and $x$, and $y$ has a continuous second derivative.
The value of $y'(x_0)$ can be found by combining Equations (3.6) and (3.7):
$$y'(x_0) = \varphi(x_0, y_0) \tag{3.8}$$
If the solution is not unique, separate cases need to be considered. Differentiating both sides of Equation (3.6), we obtain:
$$\frac{\partial F}{\partial y'} y'' + \frac{\partial F}{\partial y} y' + \frac{\partial F}{\partial x} = 0 \tag{3.9}$$
Let
$$y' = z \tag{3.10}$$
then Equation (3.9) can be rewritten as:
$$\frac{\partial F}{\partial z} z' + \frac{\partial F}{\partial y} z + \frac{\partial F}{\partial x} = 0 \tag{3.11}$$
If:
$$\frac{\partial F}{\partial z} \neq 0 \tag{3.12}$$
then from Equation (3.11), it follows that:
$$z' = -\frac{1}{\partial F / \partial z} \left( \frac{\partial F}{\partial y} z + \frac{\partial F}{\partial x} \right) \tag{3.13}$$
At points that do not satisfy Inequality (3.12), it follows from the continuity conditions above that the left-hand side of Equation (3.13) equals the limit of the right-hand side as $x$ approaches those points. Let:
$$\mathbf{y} = \begin{pmatrix} y \\ z \end{pmatrix}, \quad \mathbf{f}(x, \mathbf{y}) = \begin{pmatrix} z \\ -\dfrac{1}{\partial F / \partial z} \left( \dfrac{\partial F}{\partial y} z + \dfrac{\partial F}{\partial x} \right) \end{pmatrix} \tag{3.14}$$
Equations (3.10) and (3.13) form a system of differential equations that can be expressed in vector notation as:
$$\frac{d\mathbf{y}}{dx} = \mathbf{f}(x, \mathbf{y}) \tag{3.15}$$
Let:
$$\mathbf{y}_0 = \begin{pmatrix} y_0 \\ y'(x_0) \end{pmatrix} \tag{3.16}$$
According to Equations (3.7), (3.10), and (3.14), we have
$$\mathbf{y}(x_0) = \mathbf{y}_0 \tag{3.17}$$
Hence, the initial value problem described by Equations (3.6) and (3.7) is transformed into:
$$\begin{cases} \dfrac{d\mathbf{y}}{dx} = \mathbf{f}(x, \mathbf{y}) \\ \mathbf{y}(x_0) = \mathbf{y}_0 \end{cases} \tag{3.18}$$
The Picard–Lindelöf theorem provides the conditions under which the initial value problem described by Equation (3.18) has a unique solution, and successive approximations of the solution can be obtained using the Picard iteration method [11].
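The transformation (3.10)–(3.14) can be generated symbolically. The implicit ODE below is a hypothetical example of ours (not from the paper), chosen so that $\partial F / \partial z = 3z^2 + 1 \neq 0$ everywhere:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# Hypothetical implicit ODE F(y', y, x) = 0, with y' written as z:
F = z**3 + z + y - x

# Equation (3.13): z' = -(F_y * z + F_x) / F_z, valid wherever F_z != 0
z_prime = -(sp.diff(F, y) * z + sp.diff(F, x)) / sp.diff(F, z)
print(sp.simplify(z_prime))  # mathematically equal to (1 - z)/(3*z**2 + 1)
```

The pair $(y, z)$ then evolves under the explicit system (3.15), to which Picard iteration applies directly.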

3.3. System of first order ODEs

Consider the system of ODEs:
$$F(y', z', y, z, x) = 0, \quad G(y', z', y, z, x) = 0 \tag{3.19}$$
subject to the initial conditions:
$$y(x_0) = y_0, \quad z(x_0) = z_0 \tag{3.20}$$
where $F$ and $G$ have continuous partial derivatives with respect to $y'$, $z'$, $y$, $z$, and $x$, and both $y$ and $z$ have continuous second derivatives.
From Equations (3.19) and (3.20), we obtain:
$$F(y'(x_0), z'(x_0), y_0, z_0, x_0) = 0, \quad G(y'(x_0), z'(x_0), y_0, z_0, x_0) = 0 \tag{3.21}$$
These equations can be solved to find:
$$y'(x_0) = g(x_0, y_0), \quad z'(x_0) = h(x_0, y_0) \tag{3.22}$$
Similar to the case discussed in the last subsection, if the solution is not unique, separate cases need to be considered. Differentiating both sides of Equation (3.19), we get:
$$\begin{cases} \dfrac{\partial F}{\partial y'} y'' + \dfrac{\partial F}{\partial z'} z'' + \dfrac{\partial F}{\partial y} y' + \dfrac{\partial F}{\partial z} z' + \dfrac{\partial F}{\partial x} = 0 \\ \dfrac{\partial G}{\partial y'} y'' + \dfrac{\partial G}{\partial z'} z'' + \dfrac{\partial G}{\partial y} y' + \dfrac{\partial G}{\partial z} z' + \dfrac{\partial G}{\partial x} = 0 \end{cases} \tag{3.23}$$
Let:
$$y' = u, \quad z' = v \tag{3.24}$$
Substituting these into Equation (3.23), we can rewrite it as:
$$\begin{cases} \dfrac{\partial F}{\partial u} u' + \dfrac{\partial F}{\partial v} v' + \dfrac{\partial F}{\partial y} u + \dfrac{\partial F}{\partial z} v + \dfrac{\partial F}{\partial x} = 0 \\ \dfrac{\partial G}{\partial u} u' + \dfrac{\partial G}{\partial v} v' + \dfrac{\partial G}{\partial y} u + \dfrac{\partial G}{\partial z} v + \dfrac{\partial G}{\partial x} = 0 \end{cases} \tag{3.25}$$
If:
$$\begin{vmatrix} \dfrac{\partial F}{\partial u} & \dfrac{\partial F}{\partial v} \\ \dfrac{\partial G}{\partial u} & \dfrac{\partial G}{\partial v} \end{vmatrix} \neq 0 \tag{3.26}$$
then Equation (3.25) can be solved to find:
$$u' = -\frac{\begin{vmatrix} \dfrac{\partial F}{\partial y} u + \dfrac{\partial F}{\partial z} v + \dfrac{\partial F}{\partial x} & \dfrac{\partial F}{\partial v} \\ \dfrac{\partial G}{\partial y} u + \dfrac{\partial G}{\partial z} v + \dfrac{\partial G}{\partial x} & \dfrac{\partial G}{\partial v} \end{vmatrix}}{\begin{vmatrix} \dfrac{\partial F}{\partial u} & \dfrac{\partial F}{\partial v} \\ \dfrac{\partial G}{\partial u} & \dfrac{\partial G}{\partial v} \end{vmatrix}}, \quad v' = -\frac{\begin{vmatrix} \dfrac{\partial F}{\partial u} & \dfrac{\partial F}{\partial y} u + \dfrac{\partial F}{\partial z} v + \dfrac{\partial F}{\partial x} \\ \dfrac{\partial G}{\partial u} & \dfrac{\partial G}{\partial y} u + \dfrac{\partial G}{\partial z} v + \dfrac{\partial G}{\partial x} \end{vmatrix}}{\begin{vmatrix} \dfrac{\partial F}{\partial u} & \dfrac{\partial F}{\partial v} \\ \dfrac{\partial G}{\partial u} & \dfrac{\partial G}{\partial v} \end{vmatrix}} \tag{3.27}$$
Similar to the situation in the last subsection, for points that do not satisfy Inequality (3.26), the left-hand side of Equation (3.27) equals the limit of the right-hand side as $x$ approaches these points, as guaranteed by the continuity conditions. The same principle applies to similar scenarios later on.
Define:
$$\mathbf{y} = \begin{pmatrix} y \\ z \\ u \\ v \end{pmatrix}, \quad \mathbf{f}(x, \mathbf{y}) = \begin{pmatrix} u \\ v \\ -\dfrac{\begin{vmatrix} \frac{\partial F}{\partial y} u + \frac{\partial F}{\partial z} v + \frac{\partial F}{\partial x} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial y} u + \frac{\partial G}{\partial z} v + \frac{\partial G}{\partial x} & \frac{\partial G}{\partial v} \end{vmatrix}}{\begin{vmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial v} \end{vmatrix}} \\ -\dfrac{\begin{vmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial y} u + \frac{\partial F}{\partial z} v + \frac{\partial F}{\partial x} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial y} u + \frac{\partial G}{\partial z} v + \frac{\partial G}{\partial x} \end{vmatrix}}{\begin{vmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial v} \end{vmatrix}} \end{pmatrix}, \quad \mathbf{y}_0 = \begin{pmatrix} y_0 \\ z_0 \\ g(x_0, y_0) \\ h(x_0, y_0) \end{pmatrix} \tag{3.28}$$
where $F = F(u, v, y, z, x)$ and $G = G(u, v, y, z, x)$. Based on Equations (3.20), (3.22), (3.24), (3.27), and (3.28), the initial value problem described by Equations (3.19) and (3.20) can be transformed into the following initial value problem in vector form:
$$\begin{cases} \dfrac{d\mathbf{y}}{dx} = \mathbf{f}(x, \mathbf{y}) \\ \mathbf{y}(x_0) = \mathbf{y}_0 \end{cases} \tag{3.29}$$
The Picard–Lindelöf theorem provides the conditions under which the initial value problem in Equation (3.29) has a unique solution, and successive approximations of the solution can be obtained using the Picard iteration method [11].
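The Cramer's-rule step (3.27) can likewise be automated. Here $F$ and $G$ are hypothetical choices of ours (not from the paper), with $u$ and $v$ standing for $y'$ and $z'$, and `up`, `vp` standing for $u'$ and $v'$:

```python
import sympy as sp

x, y, z, u, v, up, vp = sp.symbols("x y z u v up vp")

# Hypothetical pair F(u, v, y, z, x), G(u, v, y, z, x) -- our own toy choices
F = u * v + y - x
G = u - v + z

# Left-hand sides of (3.25), linear in the unknowns u' (up) and v' (vp)
eq1 = sp.Eq(sp.diff(F, u) * up + sp.diff(F, v) * vp
            + sp.diff(F, y) * u + sp.diff(F, z) * v + sp.diff(F, x), 0)
eq2 = sp.Eq(sp.diff(G, u) * up + sp.diff(G, v) * vp
            + sp.diff(G, y) * u + sp.diff(G, z) * v + sp.diff(G, x), 0)
sol = sp.solve([eq1, eq2], [up, vp], dict=True)[0]
print(sp.simplify(sol[up]), sp.simplify(sol[vp]))
```

The solvability condition is exactly the nonvanishing determinant (3.26), here $-(u + v) \neq 0$.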

4. Existence and uniqueness of solutions

Definition 1 (Lipschitz condition) [11]. A vector function $\mathbf{f}(x, \mathbf{y}): D \to \mathbb{R}^n$ is said to satisfy a uniform Lipschitz condition with respect to $\mathbf{y}$ on the open set $D \subset \mathbb{R} \times \mathbb{R}^n$ provided there is a constant $L$ such that:
$$\|\mathbf{f}(x, \mathbf{y}_1) - \mathbf{f}(x, \mathbf{y}_2)\| \leq L \|\mathbf{y}_1 - \mathbf{y}_2\| \tag{3.30}$$
for all $(x, \mathbf{y}_1), (x, \mathbf{y}_2) \in D$. The constant $L$ is called a Lipschitz constant for $\mathbf{f}(x, \mathbf{y})$ with respect to $\mathbf{y}$ on $D$.
Theorem 1 (Picard–Lindelöf theorem) [11]. Assume that $\mathbf{f}$ is a continuous vector function on the rectangle:
$$D = \left\{ (x, \mathbf{y}) : x_0 \leq x \leq x_0 + a, \ \|\mathbf{y} - \mathbf{y}_0\| \leq b \right\} \tag{3.31}$$
and that $\mathbf{f}(x, \mathbf{y})$ satisfies a uniform Lipschitz condition with respect to $\mathbf{y}$ on $D$. Let:
$$M = \max_{(x, \mathbf{y}) \in D} \|\mathbf{f}(x, \mathbf{y})\|, \quad h = \min\left(a, \frac{b}{M}\right) \tag{3.32}$$
Then the initial value problems described by Equations (3.18) and (3.29) have a unique solution $\mathbf{y}(x)$ on $[x_0, x_0 + h]$.
According to the Picard iteration method, the sequence $\varphi_n(x)$ defined by:
$$\varphi_0(x) = \mathbf{y}_0, \quad \varphi_{n+1}(x) = \mathbf{y}_0 + \int_{x_0}^{x} \mathbf{f}(\xi, \varphi_n(\xi)) \, d\xi \tag{3.33}$$
converges uniformly to the unique solution of the initial value problems described by Equations (3.18) and (3.29) over the interval $[x_0, x_0 + h]$, and:
$$\|\mathbf{y}(x) - \varphi_n(x)\| \leq \frac{M L^n (x - x_0)^{n+1}}{(n+1)!} \tag{3.34}$$
where $L$ is a Lipschitz constant. Hence, $\varphi_n(x)$ can be regarded as an approximate solution to the initial value problem, and the error can be estimated using Equation (3.34).
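The bound (3.34) can be checked numerically on a standard textbook problem (our own choice, not one of the paper's examples): $y' = y$, $y(0) = 1$. On the rectangle $0 \leq x \leq 1/2$, $|y - 1| \leq 1$ we may take $M = 2$ and $L = 1$, and the $n$-th Picard iterate is the Taylor partial sum $\sum_{k=0}^{n} x^k / k!$:

```python
import math

M, L, x = 2.0, 1.0, 0.5   # Lipschitz data for y' = y on the chosen rectangle

for n in range(1, 8):
    # n-th Picard iterate of y' = y, y(0) = 1, evaluated at x
    phi_n = sum(x**k / math.factorial(k) for k in range(n + 1))
    error = abs(math.exp(x) - phi_n)                        # true error vs y = e**x
    bound = M * L**n * x**(n + 1) / math.factorial(n + 1)   # right side of (3.34)
    assert error <= bound
    print(n, error, bound)
```

The printed errors decay factorially, in line with Equation (3.34).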

5. Examples

5.1. First order ODE

Consider the following initial value problem for the ODE:
$$\begin{cases} y'^2 + y^2 = 1 \\ y(0) = 0 \end{cases}, \quad 0 \leq x < 2\pi \tag{4.1}$$
where $y$ has a continuous second derivative.
From Equation (4.1), we obtain:
$$y'(0)^2 + y(0)^2 = 1 \tag{4.2}$$
Combining Equations (4.1) and (4.2), we solve:
$$y'(0) = \pm 1 \tag{4.3}$$
By differentiating both sides of the ODE in Equation (4.1), we get:
$$2 y' y'' + 2 y y' = 0 \tag{4.4}$$
Using the continuity condition, we derive:
$$y''(x) = \lim_{t \to x} \left( -\frac{y(t)\, y'(t)}{y'(t)} \right) = -y(x) \tag{4.5}$$
Let:
$$y' = z \tag{4.6}$$
The initial value problem described by Equation (4.1) is then transformed into:
$$\begin{cases} \dfrac{d\mathbf{y}}{dx} = \mathbf{f}(x, \mathbf{y}) \\ \mathbf{y}(0) = \mathbf{y}_0 \end{cases}, \quad 0 \leq x < 2\pi \tag{4.7}$$
where:
$$\mathbf{y} = \begin{pmatrix} y \\ z \end{pmatrix}, \quad \mathbf{f}(x, \mathbf{y}) = \begin{pmatrix} z \\ -y \end{pmatrix}, \quad \mathbf{y}_0 = \begin{pmatrix} 0 \\ \pm 1 \end{pmatrix} \tag{4.8}$$
When:
$$\mathbf{y}_0 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \tag{4.9}$$
According to Equation (3.33), we can calculate:
$$\begin{aligned} \varphi_0(x) &= \mathbf{y}_0 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \\ \varphi_1(x) &= \begin{pmatrix} 0 \\ 1 \end{pmatrix} + \int_0^x \begin{pmatrix} 1 \\ 0 \end{pmatrix} d\xi = \begin{pmatrix} x \\ 1 \end{pmatrix} \\ \varphi_2(x) &= \begin{pmatrix} 0 \\ 1 \end{pmatrix} + \int_0^x \begin{pmatrix} 1 \\ -\xi \end{pmatrix} d\xi = \begin{pmatrix} x \\ 1 - \frac{x^2}{2} \end{pmatrix} \\ \varphi_3(x) &= \begin{pmatrix} 0 \\ 1 \end{pmatrix} + \int_0^x \begin{pmatrix} 1 - \frac{\xi^2}{2} \\ -\xi \end{pmatrix} d\xi = \begin{pmatrix} x - \frac{x^3}{3!} \\ 1 - \frac{x^2}{2} \end{pmatrix} \\ \varphi_4(x) &= \begin{pmatrix} 0 \\ 1 \end{pmatrix} + \int_0^x \begin{pmatrix} 1 - \frac{\xi^2}{2} \\ -\xi + \frac{\xi^3}{3!} \end{pmatrix} d\xi = \begin{pmatrix} x - \frac{x^3}{3!} \\ 1 - \frac{x^2}{2} + \frac{x^4}{4!} \end{pmatrix} \\ \varphi_5(x) &= \begin{pmatrix} x - \frac{x^3}{3!} + \frac{x^5}{5!} \\ 1 - \frac{x^2}{2} + \frac{x^4}{4!} \end{pmatrix} \end{aligned} \tag{4.10}$$
It is not difficult to prove that:
$$\lim_{n \to \infty} \varphi_n(x) = \begin{pmatrix} \sum_{n=0}^{\infty} \dfrac{(-1)^n x^{2n+1}}{(2n+1)!} \\ \sum_{n=0}^{\infty} \dfrac{(-1)^n x^{2n}}{(2n)!} \end{pmatrix} = \begin{pmatrix} \sin x \\ \cos x \end{pmatrix} \tag{4.11}$$
which implies:
$$y = \sin x, \quad y' = \cos x \tag{4.12}$$
Similarly, when:
$$\mathbf{y}_0 = \begin{pmatrix} 0 \\ -1 \end{pmatrix} \tag{4.13}$$
we obtain:
$$y = -\sin x, \quad y' = -\cos x \tag{4.14}$$
In conclusion, the solution to the initial value problem described by Equation (4.1) is
$$y = \pm \sin x, \quad 0 \leq x < 2\pi \tag{4.15}$$
This solution can be easily obtained or verified by other methods. The approximate analytical solutions of Equation (4.1) obtained using the DQM and the exact solution are plotted in Figure 1. As seen in the figure, the approximate solution $\varphi_5(x)$ is already very close to the exact solution.
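The iterates in Equation (4.10) can be reproduced symbolically. The sketch below (using sympy, with our own variable names) applies the Picard map (3.33) to the quasilinearized system $\mathbf{f}(x, \mathbf{y}) = (z, -y)^T$:

```python
import sympy as sp

x, t = sp.symbols("x t")
y0 = sp.Matrix([0, 1])      # initial data (y(0), y'(0)) = (0, 1)
phi = y0                    # phi_0
for _ in range(5):
    f = sp.Matrix([phi[1], -phi[0]])   # f(x, y) = (z, -y)
    # Picard map (3.33): phi_{n+1} = y0 + integral_0^x f(xi, phi_n(xi)) dxi
    phi = y0 + f.subs(x, t).applyfunc(lambda e: sp.integrate(e, (t, 0, x)))

print(sp.expand(phi[0]))    # partial sum of sin x: x - x**3/6 + x**5/120
print(sp.expand(phi[1]))    # partial sum of cos x: 1 - x**2/2 + x**4/24
```

These polynomials match $\varphi_5(x)$ in Equation (4.10).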

5.2. System of first order ODEs

Consider an object of mass $2$ moving in a uniform circle under the gravitational pull of another object of mass $\frac{1}{2G}$ located at the origin $(0, 0)$, where $G$ is the gravitational constant. The initial position of the orbiting object is $(0, 1)$. The goal is to determine the position coordinates of this object at any time.
Let $(x, y)$ represent the coordinates of the object at time $t$. The zero of potential energy is taken at infinity. Using the principles of energy conservation and angular momentum conservation, the following ODEs and initial conditions can be derived:
$$\begin{cases} x'^2 + y'^2 - \dfrac{1}{\sqrt{x^2 + y^2}} = -\dfrac{1}{2} \\ x' y - x y' = \dfrac{1}{\sqrt{2}} \\ x(0) = 0, \ y(0) = 1 \end{cases}, \quad t \geq 0 \tag{4.16}$$
where $x$ and $y$ are functions of $t$. From Equation (4.16), we find:
$$x(0) = 0, \quad y(0) = 1, \quad x'(0) = \frac{1}{\sqrt{2}}, \quad y'(0) = 0 \tag{4.17}$$
Taking the derivatives of both sides of the ODEs in Equation (4.16), we obtain:
$$\begin{cases} 2 x' x'' + 2 y' y'' + \left( x^2 + y^2 \right)^{-3/2} (x x' + y y') = 0 \\ x'' y - x y'' = 0 \end{cases} \tag{4.18}$$
Using the continuity condition, we derive:
$$x'' = -\frac{x \left( 2 x'^2 + 2 y'^2 + 1 \right)^3}{16}, \quad y'' = -\frac{y \left( 2 x'^2 + 2 y'^2 + 1 \right)^3}{16} \tag{4.19}$$
Let:
$$x' = z, \quad y' = w \tag{4.20}$$
The initial value problem described by Equation (4.16) is then transformed into:
$$\begin{cases} \dfrac{d\mathbf{y}}{dt} = \mathbf{f}(t, \mathbf{y}) \\ \mathbf{y}(0) = \mathbf{y}_0 \end{cases}, \quad t \geq 0 \tag{4.21}$$
where:
$$\mathbf{y} = \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}, \quad \mathbf{f}(t, \mathbf{y}) = \begin{pmatrix} z \\ w \\ -\dfrac{x \left( 2 z^2 + 2 w^2 + 1 \right)^3}{16} \\ -\dfrac{y \left( 2 z^2 + 2 w^2 + 1 \right)^3}{16} \end{pmatrix}, \quad \mathbf{y}_0 = \begin{pmatrix} 0 \\ 1 \\ \frac{1}{\sqrt{2}} \\ 0 \end{pmatrix} \tag{4.22}$$
Using Equation (3.33), we obtain:
$$\begin{aligned} \varphi_0(t) &= \mathbf{y}_0 = \begin{pmatrix} 0 \\ 1 \\ \frac{1}{\sqrt{2}} \\ 0 \end{pmatrix} \\ \varphi_1(t) &= \begin{pmatrix} 0 \\ 1 \\ \frac{1}{\sqrt{2}} \\ 0 \end{pmatrix} + \int_0^t \begin{pmatrix} \frac{1}{\sqrt{2}} \\ 0 \\ 0 \\ -\frac{1}{2} \end{pmatrix} d\tau = \begin{pmatrix} \frac{1}{\sqrt{2}} t \\ 1 \\ \frac{1}{\sqrt{2}} \\ -\frac{1}{2} t \end{pmatrix} \\ \varphi_2(t) &= \begin{pmatrix} \frac{1}{\sqrt{2}} t \\ 1 - \frac{1}{4} t^2 \\ \frac{1}{\sqrt{2}} - \frac{1}{4\sqrt{2}} t^2 - \frac{3}{32\sqrt{2}} t^4 - \frac{1}{64\sqrt{2}} t^6 - \frac{1}{1024\sqrt{2}} t^8 \\ -\frac{1}{2} t - \frac{1}{8} t^3 - \frac{3}{160} t^5 - \frac{1}{896} t^7 \end{pmatrix} \\ \varphi_3(t) &= \begin{pmatrix} \frac{1}{\sqrt{2}} t - \frac{1}{12\sqrt{2}} t^3 - \frac{3}{160\sqrt{2}} t^5 - \frac{1}{448\sqrt{2}} t^7 - \frac{1}{9216\sqrt{2}} t^9 \\ 1 - \frac{1}{4} t^2 - \frac{1}{32} t^4 - \frac{1}{320} t^6 - \frac{1}{7168} t^8 \\ \frac{1}{\sqrt{2}} - \frac{1}{4\sqrt{2}} t^2 - \frac{1}{64\sqrt{2}} t^6 - \frac{81}{10240\sqrt{2}} t^8 - \frac{3663}{1433600\sqrt{2}} t^{10} \\ -\frac{1}{2} t + \frac{1}{24} t^3 - \frac{3}{160} t^5 - \frac{51}{8960} t^7 - \frac{31}{28672} t^9 \end{pmatrix} \end{aligned} \tag{4.23}$$
For convenience, terms of degree higher than $20$ are excluded when computing the sequence $\varphi_n$ in Equation (4.23); this truncation may, however, alter the convergence speed somewhat. From a physical standpoint, it is straightforward to show that the solution to the initial value problem described by Equation (4.16) is:
$$x = \sin \frac{t}{\sqrt{2}}, \quad y = \cos \frac{t}{\sqrt{2}}, \quad t \geq 0 \tag{4.24}$$
The approximate analytical solutions of Equation (4.16), obtained via the DQM, along with the exact solution, are plotted in Figure 2. As observed from the figure, within the interval $[0, 2\sqrt{2}\pi)$, the approximate solution $\varphi_{20}(t)$ closely matches the exact solution.
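As a check, the closed-form solution (4.24) can be substituted symbolically into the conserved quantities on the left-hand sides of Equation (4.16):

```python
import sympy as sp

t = sp.symbols("t", nonnegative=True)
x = sp.sin(t / sp.sqrt(2))
y = sp.cos(t / sp.sqrt(2))
xp, yp = sp.diff(x, t), sp.diff(y, t)

energy = xp**2 + yp**2 - 1 / sp.sqrt(x**2 + y**2)  # left side of the energy equation
angmom = xp * y - x * yp                           # left side of the momentum equation
print(sp.simplify(energy))   # -1/2
print(sp.simplify(angmom))   # sqrt(2)/2, i.e. 1/sqrt(2)
```

Both quantities reduce to the constants on the right-hand sides of Equation (4.16), confirming that (4.24) solves the initial value problem.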

6. Conclusion

The differential quasilinearization method (DQM) for obtaining approximate analytical solutions of ordinary differential equations (ODEs) was presented in this study. DQM is a widely applicable approach for solving initial value problems of ODEs, and it is especially well-suited for addressing implicit or nonlinear ODEs. The effectiveness of the method was verified by solving specific examples of both a single ODE and a system of ODEs.

References

  1. Michoski, C., Milosavljević, M., Oliver, T. & Hatch, D. R. Solving differential equations using deep neural networks. Neurocomputing 399, 193–212 (2020). [CrossRef]
  2. Kojouharov, H. V., Roy, S., Gupta, M., Alalhareth, F. & Slezak, J. M. A second-order modified nonstandard theta method for one-dimensional autonomous differential equations. Appl. Math. Lett. 112, 106775 (2021). [CrossRef]
  3. Fekete, I., Conde, S. & Shadid, J. N. Embedded pairs for optimal explicit strong stability preserving Runge–Kutta methods. J. Comput. Appl. Math. 412, 114325 (2022). [CrossRef]
  4. Dwivedi, V. & Srinivasan, B. Physics Informed Extreme Learning Machine (PIELM)–A rapid method for the numerical solution of partial differential equations. Neurocomputing 391, 96–118 (2020).
  5. Ramos, J. I. On the Picard–Lindelof method for nonlinear second-order differential equations. Appl. Math. Comput. 203, 238–242 (2008). [CrossRef]
  6. Chen, F. & Liu, Q. Modified asymptotic Adomian decomposition method for solving Boussinesq equation of groundwater flow. Appl. Math. Mech. Ed. 35, 481–488 (2014). [CrossRef]
  7. Li, W. & Pang, Y. Application of Adomian decomposition method to nonlinear systems. Adv. Differ. Equations 2020, 67 (2020). [CrossRef]
  8. Shukla, A. K., Ramamohan, T. R. & Srinivas, S. Homotopy analysis method with a non-homogeneous term in the auxiliary linear operator. Commun. Nonlinear Sci. Numer. Simul. 17, 3776–3787 (2012). [CrossRef]
  9. Ghoreishi, M., Ismail, A. I. B. M., Alomari, A. K. & Sami Bataineh, A. The comparison between Homotopy Analysis Method and Optimal Homotopy Asymptotic Method for nonlinear age-structured population models. Commun. Nonlinear Sci. Numer. Simul. 17, 1163–1177 (2012). [CrossRef]
  10. He, J. & Wu, X. Variational iteration method: New development and applications. Comput. Math. with Appl. 54, 881–894 (2007). [CrossRef]
  11. Kelley, W. G. & Peterson, A. C. The Theory of Differential Equations: Classical and Qualitative. (Springer, 2010).
  12. Pandey, R. K. & Tomar, S. An effective scheme for solving a class of nonlinear doubly singular boundary value problems through quasilinearization approach. J. Comput. Appl. Math. 392, 113411 (2021). [CrossRef]
  13. Turkyilmazoglu, M. Accelerating the convergence of Adomian decomposition method (ADM). J. Comput. Sci. 31, 54–59 (2019). [CrossRef]
  14. Zeidan, D., Chau, C. K., Lu, T.-T. & Zheng, W.-Q. Mathematical studies of the solution of Burgers’ equations by Adomian decomposition method. Math. Methods Appl. Sci. 43, 2171–2188 (2020). [CrossRef]
  15. Duan, J.-S., Chaolu, T., Rach, R. & Lu, L. The Adomian decomposition method with convergence acceleration techniques for nonlinear fractional differential equations. Comput. Math. with Appl. 66, 728–736 (2013). [CrossRef]
  16. Aly, E. H., Ebaid, A. & Rach, R. Advances in the Adomian decomposition method for solving two-point nonlinear boundary value problems with Neumann boundary conditions. Comput. Math. with Appl. 63, 1056–1065 (2012).
  17. Naik, P. A., Zu, J. & Ghoreishi, M. Estimating the approximate analytical solution of HIV viral dynamic model by using homotopy analysis method. Chaos, Solitons & Fractals 131, 109500 (2020). [CrossRef]
  18. Rana, P., Shukla, N., Gupta, Y. & Pop, I. Homotopy analysis method for predicting multiple solutions in the channel flow with stability analysis. Commun. Nonlinear Sci. Numer. Simul. 66, 183–193 (2019). [CrossRef]
  19. Motsa, S. S., Sibanda, P. & Shateyi, S. A new spectral-homotopy analysis method for solving a nonlinear second order BVP. Commun. Nonlinear Sci. Numer. Simul. 15, 2293–2302 (2010). [CrossRef]
  20. Liao, S. On the relationship between the homotopy analysis method and Euler transform. Commun. Nonlinear Sci. Numer. Simul. 15, 1421–1431 (2010). [CrossRef]
  21. Abidi, F. & Omrani, K. The homotopy analysis method for solving the Fornberg–Whitham equation and comparison with Adomian’s decomposition method. Comput. Math. with Appl. 59, 2743–2750 (2010). [CrossRef]
  22. He, J. Variational iteration method—Some recent results and new interpretations. J. Comput. Appl. Math. 207, 3–17 (2007). [CrossRef]
  23. Anjum, N. & He, J. Laplace transform: Making the variational iteration method easier. Appl. Math. Lett. 92, 134–138 (2019).
  24. Torsu, P. On variational iterative methods for semilinear problems. Comput. Math. with Appl. 80, 1164–1175 (2020). [CrossRef]
  25. Nadeem, M., Li, F. & Ahmad, H. Modified Laplace variational iteration method for solving fourth-order parabolic partial differential equation with variable coefficients. Comput. Math. with Appl. 78, 2052–2062 (2019). [CrossRef]
Figure 1. Graphs of the approximate analytical solutions of Equation (4.1) obtained using the DQM compared with the exact solution.
Figure 2. Graphs of the approximate analytical solutions of Equation (4.16) obtained using the DQM compared with the exact solution.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.