
Fast hybrid numerical-asymptotic boundary element methods for high frequency screen and aperture problems based on least-squares collocation

We present a hybrid numerical-asymptotic (HNA) boundary element method (BEM) for high frequency scattering by two-dimensional screens and apertures, whose computational cost to achieve any prescribed accuracy remains bounded with increasing frequency. Our method is a collocation implementation of the high order hp HNA approximation space of Hewett et al. (IMA J Numer Anal 35:1698–1728, 2015), where a Galerkin implementation was studied. An advantage of the current collocation scheme is that the one-dimensional highly oscillatory singular integrals appearing in the BEM matrix entries are significantly easier to evaluate than the two-dimensional integrals appearing in the Galerkin case, which leads to much faster computation times. Here we compute the required integrals at frequency-independent cost using the numerical method of steepest descent, which involves complex contour deformation. The change from Galerkin to collocation is nontrivial because naive collocation implementations based on square linear systems suffer from severe numerical instabilities associated with the numerical redundancy of the HNA basis, which produces highly ill-conditioned BEM matrices. In this paper we show how these instabilities can be removed by oversampling, and solving the resulting overdetermined collocation system in a weighted least-squares sense using a truncated singular value decomposition. On the basis of our numerical experiments, the amount of oversampling required to stabilise the method is modest (around 25% typically suffices), and independent of frequency. As an application of our method we present numerical results for high frequency scattering by prefractal approximations to the middle-third Cantor set.

The numerical simulation of high frequency (short wavelength) acoustic and electromagnetic scattering is challenging because of the need to capture the rapid oscillations in the wave field. Conventional Boundary Element Methods (BEMs), based on piecewise-polynomial basis functions, are computationally expensive because they require a fixed number of degrees of freedom (DOFs) per wavelength to achieve accurate solutions. This leads to very large (dense) BEM matrices which are costly to store and invert.

By contrast, hybrid numerical-asymptotic (HNA) BEMs use basis functions built from piecewise polynomials on coarse meshes multiplied by certain oscillatory functions, chosen based on partial knowledge of the high frequency asymptotic solution behaviour, as described by Geometrical Optics (GO) and the Geometrical Theory of Diffraction (GTD) [4, 41, 43]. The goal of the HNA approach (reviewed in [7]) is to achieve a fixed accuracy of approximation using a number of DOFs that is relatively small and frequency-independent, or only modestly (e.g. logarithmically) frequency-dependent, making it easier to store and invert the BEM matrix at high frequencies. HNA BEMs have been successfully developed for scattering by impenetrable convex [12, 13, 33], nonconvex [10, 31] and curvilinear [44] polygons, two-dimensional screens and apertures [32], smooth convex two-dimensional [2, 5, 19,20,21] and three-dimensional [23] scatterers, three-dimensional rectangular screens [30], penetrable convex polygons [27, 28] and, recently, for certain multi-obstacle scattering problems [6].

The use of oscillatory bases in HNA methods leads to an essentially frequency-independent BEM system size. However, the BEM matrix entries now involve highly oscillatory singular integrals, which need to be evaluated efficiently if one is to achieve the holy grail of frequency-independent computational cost. The majority of HNA methods in the literature to date (e.g. [10, 12, 13, 19,20,21, 23, 28, 30,31,32,33, 44]) are based on Galerkin discretisations. This leads to provably stable approximations and, in many cases (e.g. [10, 12, 20, 21, 31,32,33]), provides the framework for a rigorous, frequency-explicit convergence analysis. But Galerkin testing produces high-dimensional oscillatory integrals, which complicates the development of fast quadrature routines.

In order to develop HNA methods for more practically relevant problems than those tackled so far (in particular, three-dimensional problems), it would be highly advantageous to be able to use a collocation or Nyström approach, for which numerical quadrature is simpler. Existing work in this direction includes for example the piecewise-constant collocation method for convex polygons in [1], the B-spline method for two-dimensional smooth convex scatterers in [25] (which is based on the earlier technical report [26]) and the closely related Nyström method in [5]. However, these approaches are not generally supported by rigorous stability and convergence analysis, a practical consequence of which is that it is not at all obvious how to determine collocation/quadrature point distributions leading to stable approximations. This question is particularly delicate when working with non-smooth scatterers and HNA approximation spaces built with overlapping meshes to represent different high frequency solution components (as in [1]), which can exhibit a high degree of numerical redundancy and hence severe ill-conditioning (see e.g. the discussion in [1, 2] and Sect.5 below).

Our aim in this paper is to demonstrate that accurate and efficient collocation-based HNA methods can indeed be developed for high frequency scattering problems involving non-smooth scatterers. The key new idea, apparently not employed previously in HNA methods, is to stabilise the method using oversampling, and solve the resulting overdetermined collocation linear system in a weighted least-squares sense.

We present our oversampled collocation approach in the context of a specific model problem, namely high frequency two-dimensional scattering by collinear screens and apertures. For an illustration of the problem see Fig.1. Such problems have numerous applications in acoustics, electromagnetics and water waves; for details we refer to the extensive reference list in [32], noting also the recent work on iterative Wiener-Hopf approaches to this problem in [49]. Our collocation HNA BEM for the screen problem uses the same high order HNA approximation space as the Galerkin method presented in [32], which is based on an hp refinement strategy, giving exponential accuracy with increasing polynomial degree (see Theorem 3 below). To stabilise our collocation method we take more collocation points than degrees of freedom, and solve the resulting rectangular (overdetermined) system in a weighted least-squares sense, using a truncated singular value decomposition. The one-dimensional oscillatory singular integrals appearing in the BEM matrix are evaluated accurately and efficiently using the numerical method of steepest descent [16, 17, 39], which is based on complex contour deformation, combined with generalised Gaussian quadrature [37], to handle the singularities without the need for mesh grading.

By a series of numerical experiments we demonstrate that our HNA collocation BEM can achieve a fixed solution accuracy in a computation time that is bounded (in fact, sometimes even decreasing) as the frequency tends to infinity. A key empirical observation (that we are currently unable to explain theoretically) is that the oversampling threshold, which governs the number of collocation points per DOF, does not need to increase with increasing frequency in order to maintain accuracy.

This paper has as its genesis the MSc dissertation [48] of the fourth author, in which a range of HNA approximation spaces and collocation point distribution strategies were compared for square collocation BEM matrices. Numerical instabilities encountered in [48] motivated the investigation of the oversampled collocation method presented here. The oscillatory integration in [48] was carried out using Filon quadrature (see e.g. [16, 38, 40]), combined with mesh grading to handle singularities (as in [18]).

We point out that our oversampled collocation approach differs slightly from that of the CHIEF method of [50]. The CHIEF method was developed for scattering by closed surfaces, and introduces extra collocation points in the interior of the scatterer to stabilise integral equation formulations that are ill-posed at certain values of k, corresponding to interior resonances. In the current paper we are introducing extra collocation points on the scatterer boundary, to stabilise an integral equation formulation for scattering by an open surface (in particular, the scatterer has no interior), which is well-posed for all k but suffers from ill-conditioning when approximated numerically.

The structure of the paper is as follows. In Sect.2 we define the scattering problem and its boundary integral equation reformulation. In Sect.3 we describe the HNA approximation space of [32], and recall the corresponding best approximation error estimate. In Sect.4 we present our regularised collocation approach, and in Sect.5 we discuss the philosophy behind the truncated SVD approach used to solve the least-squares problem. In Sect.6 we outline our numerical quadrature scheme for evaluating the oscillatory and singular integrals appearing in the BEM system. In Sect.7 we present a series of numerical experiments demonstrating the performance of our method, including an application to scattering by the middle-third Cantor set.

Real part of the total field for scattering by \(N_\varGamma =4\) collinear sound-soft screens constituting the order \(j=2\) prefractal of the middle-third Cantor set (for more details see Sect.7.3). The incident wave propagates from top left to bottom right

The scatterer is a screen \(\varGamma\), a bounded and relatively open subset of \(\varGamma _\infty :=\{ {\mathbf{x}}=(x_1,x_2)\in \mathbb {R}^2:x_2=0\}\) comprising \(N_{\varGamma }\ge 1\) collinear disjoint open intervals: with \(0=s_1<s_2<\cdots <s_{2N_\varGamma }\), we set \(\varGamma :=\bigcup _{j=1}^{N_\varGamma }\varGamma _j\), where \(\varGamma _j:=(s_{2j-1},s_{2j})\times \{0\}\). The associated propagation domain is \(D:=\mathbb {R}^2\setminus \overline{\varGamma }\).

As the incident wave we take \(u^i({\mathbf{x}}) :=\mathrm{e}^{\mathrm{i}k {\mathbf{x}}\cdot {\mathbf{d}}}\), where \(k>0\) is the nondimensional wavenumber and \({\mathbf{d}}=(d_1,d_2)\in \mathbb {R}^2\) is a unit direction vector. The boundary value problem (BVP) to be solved is: Find \(u\in C^2\left( D\right) \cap C(\mathbb {R}^2)\cap W^1_{\mathrm {loc}}(D)\) s.t.

\(\Delta u+k^2u=0\quad \text {in } D,\)  (2)

\(u=0\quad \text {on } \varGamma ,\)  (3)

with the scattered field \(u^s:=u-u^i\) satisfying the Sommerfeld radiation condition (see, e.g., [7, Equation(2.9)]). Here \(W^1_\mathrm{loc}(D)\) is the usual space of locally square-integrable functions whose weak gradient is also locally square-integrable. Well-posedness of the BVP (2)–(3) is proved, for example, in [52]. For an example solution see Fig.1.

By Babinet's principle, the Dirichlet screen problem (2)–(3) is equivalent to its complementary aperture problem, in which \(u^i\) impinges on an aperture \(\varGamma\) in a sound-hard (Neumann) screen occupying \(\varGamma _\infty \setminus \overline{\varGamma }\). (For a precise definition of the aperture BVP see [32, Definition1.2].) Explicitly, with \(U^\pm = \{{\mathbf{x}}\in \mathbb {R}^2:\pm x_2>0\}\), \(u^r({\mathbf{x}}) :=\mathrm{e}^{\mathrm{i}k {\mathbf{x}}\cdot {\mathbf{d}}'}\), \(d_2<0\), and \({\mathbf{d}}':=(d_1,-d_2)\), the screen and aperture solutions u and \(u'\) are connected by the explicit formula given in [32, Equation(3.8)]; it therefore suffices to solve the screen problem.

Theorem 1 below reformulates the BVP (2)–(3) as an integral equation for the normal derivative jump \([{\partial u}/{\partial {\mathbf{n}}}]:=({\partial u}/{\partial {\mathbf{n}}})^{+} - ({\partial u}/{\partial {\mathbf{n}}})^{-}\), where \(^\pm\) respectively denote the values on the top (\(+\)) and bottom (\(-\)) of \(\varGamma\). We first clarify some notation (for more detail see [32]). For \(s\in \mathbb {R}\), we denote by \(H^{s}(\mathbb {R})\) the usual Sobolev space on \(\mathbb {R}\), which, following [8, 32], we equip with the wavenumber-dependent norm

\(\Vert u\Vert ^2_{H_k^{s}(\mathbb {R})}:=\int _{\mathbb {R}}(k^2+\xi ^2)^{s}|\hat{u}(\xi )|^2\,\mathrm{d}\xi ,\)

where \(\hat{u}\) denotes the Fourier transform of u. We set \(\widetilde{H}^{s}(\widetilde{\varGamma }):=\overline{C^\infty _0(\widetilde{\varGamma })}^{H^{s}(\mathbb {R})}\), equipped with the norm \(\Vert u\Vert _{H_k^{s}(\mathbb {R})}\) inherited from \(H^s(\mathbb {R})\), and \(H^s(\widetilde{\varGamma }):=\{u=U|_{\widetilde{\varGamma }}:U\in H^s(\mathbb {R})\}\), equipped with the norm \(\Vert u\Vert _{H^s_k(\widetilde{\varGamma })} = \inf \Vert U\Vert _{H^s_k(\mathbb {R})}\), where the infimum is taken over all \(U\in H^s(\mathbb {R})\) such that \(u=U|_{\widetilde{\varGamma }}\). We identify \(\varGamma \subset \varGamma _\infty\) with \(\widetilde{\varGamma }\subset \mathbb {R}\) in the natural way and define \(H^s(\varGamma _\infty ):=H^s(\mathbb {R})\), \(\widetilde{H}^s(\varGamma ):=\widetilde{H}^s(\widetilde{\varGamma })\), \(H^s(\varGamma ):=H^s(\widetilde{\varGamma })\) etc. We denote by \(\mathcal {S}:\widetilde{H}^{-1/2}(\varGamma )\rightarrow C^2(D)\cap W^1_{loc}(\mathbb {R}^2)\) and \(S:\widetilde{H}^{-1/2}(\varGamma )\rightarrow H^{1/2}(\varGamma )\) the standard single-layer potential and single-layer boundary integral operator, which for \(\phi \in L^2(\varGamma )\) have the integral representations

\(\mathcal {S}\phi ({\mathbf{x}}):=\int _\varGamma \varPhi ({\mathbf{x}},{\mathbf{y}})\,\phi ({\mathbf{y}})\,\mathrm{d}s({\mathbf{y}}),\ {\mathbf{x}}\in D,\qquad S\phi ({\mathbf{x}}):=\int _\varGamma \varPhi ({\mathbf{x}},{\mathbf{y}})\,\phi ({\mathbf{y}})\,\mathrm{d}s({\mathbf{y}}),\ {\mathbf{x}}\in \varGamma ,\)

where \(\varPhi ({\mathbf{x}},{\mathbf{y}}):=\frac{\mathrm{i}}{4}H^{(1)}_0(k|{\mathbf{x}}-{\mathbf{y}}|)\) is the fundamental solution of the two-dimensional Helmholtz equation.

Theorem 1 states that if u satisfies the BVP (2)–(3) then u admits the representation

\(u=u^i-\mathcal {S}[{\partial u}/{\partial {\mathbf{n}}}],\)  (6)

where \(v:=[{\partial u}/{\partial {\mathbf{n}}}]\in \widetilde{H}^{-1/2}(\varGamma )\) satisfies the boundary integral equation (BIE)

\(Sv=f,\)  (7)

with \(f:=u^i|_\varGamma \in H^{1/2}(\varGamma )\). Conversely, suppose that \(v\in \widetilde{H}^{-1/2}(\varGamma )\) satisfies (7). Then u defined by (6) satisfies the BVP (2)–(3), and \([{\partial u}/{\partial {\mathbf{n}}}]=v\).

Theorem 2 (see [32]) states that, on each component \(\varGamma _j\), the solution admits the decomposition

\(v(s)=\varPsi (s)+v_j^+(s-s_{2j-1})\,\mathrm{e}^{\mathrm{i}ks}+v_j^-(s_{2j}-s)\,\mathrm{e}^{-\mathrm{i}ks},\quad s\in (s_{2j-1},s_{2j}),\)  (8)

where \(\varPsi :=-2{{\,\mathrm{sgn}\,}}(d_2){\partial u^i}/{\partial {\mathbf{n}}}\), and the functions \(v_j^\pm (s)\) are analytic in \(\mathfrak {R}{s}>0\), and, for each \(k_0>0\), there exists a constant \(C>0\), depending only on \(k_0\), such that the bounds (9) hold.

The known term \(\varPsi\) constitutes the physical optics approximation, describing the contribution to \(v=[{\partial u}/{\partial {\mathbf{n}}}]\) of the incident and specularly reflected waves. The bounds (9) imply that the unknown functions \(v_j^\pm\), which correspond to the amplitudes of the diffracted waves, are non-oscillatory as a function of s (see [32, Remark4.2]), which means they can be approximated much more efficiently than the full (oscillatory) solution v. Hence, rather than approximating v itself using piecewise polynomials on a fine (k-dependent) mesh (as in conventional BEMs), our HNA approximation space uses the decomposition(8), with \(\varPsi\) evaluated analytically and \(v_j^+\) and \(v_j^-\) replaced by piecewise polynomials on coarse (k-independent) meshes, graded to account for the singularities of \(v_j^+\) at \(s_{2j-1}\), and of \(v_j^-\) at \(s_{2j}\).

To describe the meshes we use, we first define a geometrically graded mesh \(\mathcal {M}\) on the interval [0,1] with l elements and grading parameter \(\sigma \in (0,1/2)\), and an associated space of piecewise polynomials \(\mathcal {P}_{{\mathbf{p}}}(\mathcal {M})\), by

\(\mathcal {M}:=\{x_0,\ldots ,x_l\},\quad x_0:=0,\ x_i:=\sigma ^{l-i},\ i=1,\ldots ,l,\)

\(\mathcal {P}_{{\mathbf{p}}}(\mathcal {M}):=\left\{ \rho :\rho |_{(x_{i-1},x_i)}\ \text{is a polynomial of degree at most } p_i,\ i=1,\ldots ,l\right\} ,\)  (10)

where \({\mathbf{p}}=(p_1,\ldots ,p_l)\) is a vector of polynomial degrees with maximum entry \(p\). On each component \(\varGamma _j\) we then define two overlapping meshes \(\mathcal {M}_j^+\) and \(\mathcal {M}_j^-\), graded towards \(s_{2j-1}\) and \(s_{2j}\) respectively,

and denote by \({\mathcal {P}}_{{\mathbf{p}}}(\mathcal {M}_j^\pm )\) the associated spaces of piecewise polynomials, defined analogously to \({\mathcal {P}}_{{\mathbf{p}}}(\mathcal {M})\) above. For an illustration of the resulting meshes see Fig.2. Finally, our HNA approximation space for approximating the difference \(v-\varPsi\) is

\(V_N:=\sum _{j=1}^{N_\varGamma }\left( {\mathcal {P}}_{{\mathbf{p}}}(\mathcal {M}_j^+)\,\mathrm{e}^{\mathrm{i}ks}+{\mathcal {P}}_{{\mathbf{p}}}(\mathcal {M}_j^-)\,\mathrm{e}^{-\mathrm{i}ks}\right) ,\)  (11)

with all functions extended by zero from \(\varGamma _j\) to \(\varGamma\).

Theorem 3 ([32, Theorem5.1]) Let \(k_0>0\) and \(l\ge cp\) for some constant \(c>0\). Then for every \(\delta >0\) there exists a constant \(C>0\), depending only on \(\delta\), \(\sigma\), \(N_{\varGamma }\) and c, and a constant \(\tau >0\), depending only on \(\delta\), \(\sigma\) and c, such that

\(\inf _{w_N\in V_N}\Vert v-\varPsi -w_N\Vert _{H^{-1/2}_k(\mathbb {R})}\le C k^{1+\delta }\,\mathrm{e}^{-p\tau }.\)  (12)

Theorem 3 states that \(V_N\) can approximate \(v - \varPsi\) with an error which decreases exponentially as the maximum polynomial degree \(p\) tends to infinity, with a k-independent exponent. While the constant premultiplying the exponential factor grows with increasing k, it does so only modestly (like \(O(k^{1+\delta })\)), meaning that we can achieve any specified accuracy of approximation with \(p\) growing only logarithmically in k as \(k\rightarrow \infty\). This corresponds to the number of degrees of freedom N growing like \(\log ^2{k}\) as \(k\rightarrow \infty\). We shall see later that this logarithmic growth appears unnecessary in practice.

Let \(\{\varphi _n\}_{n=1}^N\) denote a basis for \(V_N\). In our implementation, each basis function \(\varphi _n\) consists of an exponential \(\mathrm{e}^{\mathrm{i}ks}\) or \(\mathrm{e}^{- \mathrm{i}ks}\) multiplied by an appropriately scaled and translated Legendre polynomial supported on a single element of one of the meshes \(\mathcal {M}_j^+\) or \(\mathcal {M}_j^-\) for some \(j\in \{1,\ldots ,N_\varGamma \}\), normalised so that \(\Vert \varphi _n\Vert _{L^2(\varGamma )}=1\).
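To make this concrete, here is a minimal sketch of one such basis function (in Python, whereas the code accompanying this paper is Matlab; the function name and the example parameters are our own illustrative assumptions):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def hna_basis_function(q, a, b, k, sign=+1):
    """One HNA basis function: an L^2-normalised Legendre polynomial of
    degree q on the mesh element [a, b], multiplied by exp(sign*i*k*s)."""
    # ||P_q||^2 on [a,b] is (b-a)/(2q+1), since ||P_q||^2 on [-1,1] is 2/(2q+1)
    norm = np.sqrt((2 * q + 1) / (b - a))
    P = Legendre.basis(q, domain=[a, b])
    return lambda s: norm * P(s) * np.exp(sign * 1j * k * np.asarray(s))

phi = hna_basis_function(q=3, a=0.0, b=0.25, k=512.0, sign=+1)
s = np.linspace(0.0, 0.25, 5)
print(phi(s))  # the oscillatory factor has modulus 1, so the L^2 norm is unaffected
```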

Our collocation method is applied to the BIE for the diffracted part \(\phi :=v-\varPsi\), namely

\(S\phi =f-S\varPsi .\)  (13)

To select an element \(\sum _{n=1}^N a_n\varphi _n \in V_N\) approximating \(\phi \in \tilde{H}^{-1/2}(\varGamma )\) we ask that the BIE (13), with \(\phi\) replaced by \(\sum _{n=1}^N a_n\varphi _n \in V_N\), should hold at a set of M collocation points \(\mathcal {C}_M:=\{c_m\}_{m=1}^M\subset \varGamma \).

As we shall see, for reasons of stability it is useful to oversample, taking \(M> N\). The resulting system of linear equations for the coefficients \(a_n\), \(n=1,\ldots ,N\), is then overdetermined, so we seek a weighted least-squares solution. Explicitly, given a set of weights \(\mathcal {W}:=\{w_m\}_{m=1}^M\) corresponding to the collocation points \(\mathcal {C}_M\) (our choice of weights will be detailed below), we define \(A\in \mathbb {C}^{M\times N}\) and \({\mathbf{b}}\in \mathbb {C}^M\) by

\(A_{mn}:=\sqrt{w_m}\,S\varphi _n(c_m),\qquad b_m:=\sqrt{w_m}\left( f(c_m)-S\varPsi (c_m)\right) ,\)  (14)

and seek \({\mathbf{a}}=(a_n)_{n=1}^N\in \mathbb {C}^N\) minimising \(\Vert A{\mathbf{a}}-{\mathbf{b}}\Vert _2\).

To solve this least-squares problem we compute the singular value decomposition \(A=U\varSigma V^*\), where U and V are unitary matrices, \(V^*\) denotes the conjugate transpose of V, and \(\varSigma\) is an \(M\times N\) diagonal matrix. Denote by \(\sigma _n\) the nth entry of the diagonal of \(\varSigma\). To regularise, we introduce a small threshold \(\epsilon >0\), and define a modified diagonal matrix \(\varSigma ^{\epsilon }\), which has nth entry equal to \(\sigma _n\) if \(\sigma _n/ \max _{n'=1,\ldots ,N}\sigma _{n'}>\epsilon\), and zero otherwise. This provides a regularised solution

\({\mathbf{a}}^\epsilon :=V(\varSigma ^\epsilon )^\dag U^*{\mathbf{b}},\)  (17)

where \((\varSigma ^\epsilon )^\dag\) is the (diagonal) pseudo-inverse of \(\varSigma ^{\epsilon }\) with entries \(\sigma ^\dag _n\), defined such that \(\sigma ^\dag _n=1/\sigma _n\) if \(\sigma _n/\max _{n'=1,\ldots ,N}\sigma _{n'}>\epsilon\), and \(\sigma ^\dag _n=0\) otherwise.
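The regularised solve (17) is straightforward to implement; a minimal sketch (ours, not code from the HNABEMLAB repository), using a truncation threshold relative to \(\sigma _{\max }\) as in the definition of \(\varSigma ^\epsilon\) above:

```python
import numpy as np

def tsvd_least_squares(A, b, eps=1e-8):
    """Regularised least-squares solve a = V (Sigma^eps)^dagger U* b:
    singular values below eps * sigma_max are truncated to zero."""
    U, sigma, Vh = np.linalg.svd(A, full_matrices=False)  # A = U diag(sigma) Vh
    sigma_dag = np.zeros_like(sigma)
    keep = sigma / sigma.max() > eps                      # relative threshold
    sigma_dag[keep] = 1.0 / sigma[keep]
    return Vh.conj().T @ (sigma_dag * (U.conj().T @ b))

# Example: an ill-conditioned 50x20 system still yields a small residual
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) @ np.diag(10.0 ** -np.arange(20.0))
x = rng.standard_normal(20)
print(np.linalg.norm(A @ tsvd_least_squares(A, A @ x) - A @ x))
```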

Regarding the placement of collocation points, while we have no theoretical results to guide us, as a result of extensive numerical experiments (including those carried out for square systems (\(M=N\)) in [48]), we arrived at the following prescription.

We first select an oversampling threshold \(C_\mathrm {OS}>1\). For each \(j\in \{1,\ldots ,N_\varGamma \}\) let \(s_{2j-1}=x^\pm _{j,0}<x^\pm _{j,1}<\cdots <x^\pm _{j,l^\pm _j}=s_{2j}\) denote the meshpoints of \(\mathcal {M}_j^\pm\). On each mesh element [a,b], supporting polynomials of degree \(p_{[a,b]}\) (and hence \(p_{[a,b]}+1\) DOFs), we allocate

\(M_{[a,b]}:=\lceil C_\mathrm {OS}(p_{[a,b]}+1)\rceil\)  (19)

collocation points, so that the total number of collocation points satisfies \(M\ge C_\mathrm {OS}N\).

In (14) we choose weights \(\{w_m\}_{m=1}^M\) that are inversely proportional to the approximate local density of collocation points. Explicitly, suppose that [a,b] is a mesh element on which we have allocated \(M_{[a,b]}\) collocation points at the first-kind Chebyshev nodes

\(c_i=\frac{a+b}{2}+\frac{b-a}{2}\cos \left( \frac{(2i-1)\pi }{2M_{[a,b]}}\right) ,\quad i=1,\ldots ,M_{[a,b]};\)

each such point is then assigned the weight

\(w=\frac{b-a}{M_{[a,b]}}.\)  (20)
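A small sketch (illustrative names) of the resulting points and weights on a single element, assuming the weight takes the form \(w=(b-a)/M_{[a,b]}\) as in (20):

```python
import numpy as np

def collocation_points_and_weights(a, b, m):
    """First-kind Chebyshev nodes on a mesh element [a, b], each carrying
    the weight (b - a)/m, the reciprocal of the local point density."""
    i = np.arange(1, m + 1)
    c = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * i - 1) * np.pi / (2 * m))
    w = np.full(m, (b - a) / m)
    return c, w

c, w = collocation_points_and_weights(0.0, 0.5, 4)
print(c)        # 4 points clustered towards both ends of [0, 0.5]
print(w.sum())  # weights sum to the element length, 0.5
```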

In this section we elaborate on the motivation for considering oversampling in combination with a truncated SVD in the solution method (17). It is known from the earlier analysis in [32] (recall Theorem 3 in this paper) that the best approximation in the HNA space \(V_N\) converges to \(v-\varPsi\) at an exponential rate. Briefly, oversampling and regularization ensure that a near-best approximation can also be computed numerically, in spite of potential ill-conditioning of the linear system (14).

One obvious source of ill-conditioning of the discretization is that the basis functions \(\varphi _n\) of the approximation space (11) may be close to being linearly dependent. The underlying reason is that, loosely speaking, in our discretization on each segment \(\varGamma _j\) we are combining two bases together. The impact of this potential redundancy depends on the values of the wavenumber k and the degree vector \({\mathbf{p}}\). Consider, for example, the case of small k: then the functions in the space \(\mathcal {P}_{{\mathbf{p}}}(\mathcal {M}_j^+)\mathrm{e}^{\mathrm{i}k s}\) are smooth and non-oscillatory on \(\varGamma _j\), and they may approximate the functions in the space \(\mathcal {P}_{{\mathbf{p}}}(\mathcal {M}_j^-)\mathrm{e}^{-\mathrm{i}k s}\). In the case of fixed k and a degree vector \({\mathbf{p}}\) associated with a large maximum degree \(p\) in (10), we can make a similar observation: the large degree polynomials in \(\mathcal {P}_{{\mathbf{p}}}(\mathcal {M}_j^\pm )\) may resolve the oscillations of \(\mathrm{e}^{\pm \mathrm{i}k s}\).

Lemma 1 (cf. [35, 36]) Let \({\mathbf{a}}^\epsilon\) be given by (17). Then

\(\Vert A{\mathbf{a}}^\epsilon -{\mathbf{b}}\Vert _2\le \inf _{{\mathbf{v}}\in \mathbb {C}^N}\left( \Vert A{\mathbf{v}}-{\mathbf{b}}\Vert _2+\epsilon \Vert {\mathbf{v}}\Vert _2\right) .\)  (21)

Lemma 1 shows that the regularized pseudo-inverse (17) yields a solution with small residual, provided that a coefficient vector \({\mathbf{v}}\) exists with small residual and sufficiently small norm \(\Vert {\mathbf{v}}\Vert _2\). Note that the size of the coefficients is affected by the normalization of the basis functions, and recall from Sect.3 that for our problem we have normalized the basis functions in \(L^2(\varGamma )\). For us, the existence of suitable vectors that result in a small right-hand side in (21) is suggested by Theorems 2 and 3. Thus, ill-conditioning of A due to redundancy in the discretization does not preclude the computation of highly accurate solutions.

However, the statement of Lemma1 is purely algebraic, and a small discrete Euclidean residual of (14) does not imply a small continuous residual of the integral equation (13) in \(H^{1/2}(\varGamma )\), from which we could infer (by the bounded invertibility of \(S:\tilde{H}^{-1/2}(\varGamma ) \rightarrow H^{1/2}(\varGamma )\)) a small approximation error of the continuous solution of (13) in \(\tilde{H}^{-1/2}(\varGamma )\).

In the simpler \(L^2\) setting one can appeal to the results on function approximation using redundant sets and a regularizing SVD solver with threshold \(\epsilon\) presented in [35, 36]. No guarantees can be given about the accuracy of interpolation, with errors possibly as large as \(1/\epsilon\) [35, Proposition4.6]. However, oversampling by a sufficient amount renders the approximation problem well-conditioned [36, Sect.6]. In particular, if functions in a space G are sampled using a family of functionals \(\{ \ell _{M,m} \}_{m=1}^M\), with \(\ell _{M,m} : G \rightarrow \mathbb {C}\), then ideally the samples should be sufficiently rich to recover any function in G:

\(A'\Vert g\Vert ^2\le \sum _{m=1}^M|\ell _{M,m}(g)|^2\le B'\Vert g\Vert ^2\quad \text{for all } g\in G,\)

for constants \(B'\ge A'>0\).

The choice to weight the matrix entries with \(\sqrt{w_m}\) in (14), and the specific choice (20) of the weights, yields \(A'=B'=1\) for \(G \subset L^2(\varGamma )\), because the discrete sums are a Riemann sum for the \(L^2\) norm.

Generalising these results to the \(\tilde{H}^{-1/2}-H^{1/2}\) setting required for the rigorous analysis of our collocation BEM remains an open problem. Therefore, aside from the motivating observations presented in this section, our choice of collocation points, and of a linear oversampling rate \(M \approx C_{\mathrm {OS}} N\) with a proportionality constant \(C_{\mathrm {OS}}\) that is independent of the wavenumber, is based largely on numerical experiments, results of which we report in Sect.7 (and see also [48], where a number of different collocation point allocations were compared for square systems without oversampling).

Our HNA approximation space uses oscillatory basis functions supported on large (k-independent) mesh elements. This means that assembling the discrete system (14) involves calculating highly oscillatory integrals, which may also be singular.

All of the integrals arising in our discrete system can be written in the canonical form

\(I:=\int _0^T F(t;k)\,\mathrm{e}^{\mathrm{i}k\alpha t}\,\mathrm{d}t,\)  (23)

where \(k>0\) is the wavenumber, \(T>0\), \(\alpha \in [0,2]\), and \(F(\cdot ;k)\) is smooth and non-oscillatory on (0,T) but possibly logarithmically singular at, or near, the left endpoint \(t=0\). More explicitly, we shall assume that, for some \(t_0\in (-\infty ,0]\), \(F(\cdot ;k)\) is analytic in the cut-plane \(\mathbb {C}\setminus (-\infty ,t_0]\), with at most polynomial growth as \(|t|\rightarrow \infty\), and that there exists \(\tilde{c}>0\) and two functions \(F_0(t;k)\) and \(F_1(t;k)\), analytic in \(k|t-t_0|< \tilde{c}\), such that

\(F(t;k)=F_0(t;k)\log \left( k(t-t_0)\right) +F_1(t;k),\qquad k|t-t_0|<\tilde{c}.\)  (24)

The entries of the collocation matrix in (14) involve integrals of the form

\(S\varphi _n(c_m)=\frac{\mathrm{i}}{4}\int _a^b H^{(1)}_0(k|c_m-s|)\,\mathcal {L}_{q,[a,b]}(s)\,\mathrm{e}^{\pm \mathrm {i}k s }\,\mathrm{d}s,\)  (25)

where \(\varphi _n=\mathcal {L}_{q,[a,b]}(s)\mathrm{e}^{\pm \mathrm {i}k s }\) is the nth basis function, \(\mathcal {L}_{q,[a,b]}\) is the \(L^2\)-normalised Legendre polynomial of some degree q on \([a,b]={{\,\mathrm{supp}\,}}{\varphi _n}\), and \(c_m\) is the mth collocation point. To obtain a non-oscillatory prefactor we have extracted the phase of the Hankel function in (25) using [47, (10.2.5)]. To express (25) in the form (23) we then proceed by cases, according to the position of \(c_m\) relative to [a,b]. In each case, the fact that the resulting prefactor satisfies (24) follows from the small argument behaviour of the Hankel function (see e.g. [47, (10.4.3),(10.2.2),(10.8.2)]).

The entries of the right-hand side vector in (14) involve \(S\varPsi (c_m)\), which splits into a sum of integrals over the components of \(\varGamma\):

\(S\varPsi (c_m)=\frac{k|d_2|}{2}\sum _{j=1}^{N_\varGamma }\int _{s_{2j-1}}^{s_{2j}}H^{(1)}_0(k|c_m-s|)\,\mathrm{e}^{\mathrm{i}kd_1s}\,\mathrm{d}s.\)  (26)

Each integral in the sum (26) can be expressed in the form (23) by essentially the same procedure described above. For example, when \(c_m\le s_{2j-1}\) we can write \(\int _{s_{2j-1}}^{s_{2j}}\) in the form (23) with \(T=L_j=s_{2j}-s_{2j-1}\), \(\alpha =1+d_1\), and \(F(t;k) = \frac{{|d_2|}k}{2}\mathrm{e}^{\mathrm {i}k(s_{2j-1}-c_m)}\frac{ H^{(1)}_0(k(s_{2j-1}+t-c_m))}{\mathrm{e}^{\mathrm {i}k(s_{2j-1}+t-c_m)}}\), which has a logarithmic singularity at \(t_0 = c_m-s_{2j-1}\le 0\).

Depending on the values of the parameters \(k,T,\alpha\) and \(t_0\), the integrand (23) may be highly oscillatory and/or numerically singular. In this section we describe an algorithm which can compute (23) efficiently in all cases. Our approach combines the method of numerical steepest descent (see e.g. [16, 17, 39]), which uses complex contour deformation to convert oscillatory integrals into rapidly decaying ones, with generalised Gaussian quadrature (see e.g. [34, 37]), which handles the singularities. Our method is an extension of the scheme presented in [34, Sect.4.3], which was shown to accurately calculate singular oscillatory integrals, to the case of near-singularities, where the integrand has a singularity outside of, but close to, the integration range.

Our algorithm involves two parameters, \(c_\mathrm{osc}>0\) and \(0<c_\mathrm{sing}\le 1\). The parameter \(c_\mathrm{osc}\) controls how many oscillations we permit before applying the numerical method of steepest descent: noting that the exponential factor in (23) undergoes \(k\alpha T/(2\pi )\) complete oscillations on [0,T], we consider the integral oscillatory if \(k\alpha T/(2\pi )>c_\mathrm{osc}\) and non-oscillatory otherwise. The parameter \(c_\mathrm{sing}\) controls how close to the integration range the singularity at \(t_0\) needs to be for us to consider the integral singular (and apply generalised Gaussian quadrature). An important factor affecting the accuracy of numerical quadrature based on polynomial approximation for non-entire functions is the distance from the integration interval to the nearest singularity, measured relative to the length of the integration interval (e.g. [53, Theorem19.3]). So, in the non-oscillatory case we consider the integral singular when \(|t_0|/T<c_\mathrm{sing}\), and in the oscillatory case when \(|t_0|k\alpha /(2\pi c_\mathrm{osc})<c_\mathrm{sing}\).
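The classification logic can be summarised in a few lines; the sketch below (ours; the default parameters anticipate the tuned values (29) reported later) returns the case number used in the remainder of this section:

```python
import numpy as np

def classify_case(k, alpha, T, t0, c_osc=2.0, c_sing=0.5):
    """Case selection for the canonical integral (23)."""
    oscillatory = k * alpha * T / (2 * np.pi) > c_osc
    if oscillatory:
        singular = abs(t0) * k * alpha / (2 * np.pi * c_osc) < c_sing
    else:
        singular = abs(t0) / T < c_sing
    return 1 + 2 * oscillatory + singular   # cases 1-4 as in the text

print(classify_case(k=100.0, alpha=1.0, T=1.0, t0=-0.5))  # -> 3 (oscillatory, non-singular)
```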

Our algorithm also depends on one further parameter, \(N_{\mathrm {Q}}\in \mathbb {N}\), which controls the number of quadrature points used. Explicitly, we use \(N_{\mathrm {Q}}\) points for each standard Gaussian quadrature rule and \(2N_{\mathrm {Q}}\) points for each generalised Gaussian quadrature rule (since generalised Gaussian quadrature requires twice as many points as standard Gaussian quadrature to achieve the same degree of exactness, see e.g. [37]).

We now present our algorithm, along with diagrams illustrating the contour deformations involved. Here the arrows indicate the direction along which each contour should be traversed, and the dashed line represents the original integration contour [0,T].

Case 1: \(k\alpha T/(2\pi ) \le c_\mathrm{osc}\) and \(|t_0|/T\ge c_\mathrm{sing}\) (non-oscillatory, non-singular). We evaluate (23) using standard Gauss–Legendre quadrature with \(N_{\mathrm {Q}}\) points.

Case 3: \(k\alpha T/(2\pi ) > c_\mathrm{osc}\) and \(|t_0|k\alpha /(2\pi c_\mathrm{osc}) \ge c_\mathrm{sing}\) (oscillatory, non-singular). We deform the integration contour into the complex plane, writing

\(\int _0^T F(t;k)\,\mathrm{e}^{\mathrm{i}k\alpha t}\,\mathrm{d}t=\int _0^{\mathrm{i}\infty }F(t;k)\,\mathrm{e}^{\mathrm{i}k\alpha t}\,\mathrm{d}t-\int _T^{T+\mathrm{i}\infty }F(t;k)\,\mathrm{e}^{\mathrm{i}k\alpha t}\,\mathrm{d}t,\)

justified by our assumption that \(k\alpha \ge 0\) and that \(F(\cdot ;k)\) grows only polynomially at infinity, meaning that the integrand of (23) decays exponentially as \(\mathrm{Im}\,t\rightarrow \infty\). Parametrizing the integrals by \(t=\mathrm{i}k\alpha \xi\) (respectively \(t=T+\mathrm{i}k\alpha \xi\)) and rescaling to the standard Gauss–Laguerre weight \(\mathrm{e}^{-\xi }\) gives

\(\int _0^T F(t;k)\,\mathrm{e}^{\mathrm{i}k\alpha t}\,\mathrm{d}t=\frac{\mathrm{i}}{k\alpha }\left[ \int _0^{\infty }F\left( \frac{\mathrm{i}\xi }{k\alpha };k\right) \mathrm{e}^{-\xi }\,\mathrm{d}\xi -\mathrm{e}^{\mathrm{i}k\alpha T}\int _0^{\infty }F\left( T+\frac{\mathrm{i}\xi }{k\alpha };k\right) \mathrm{e}^{-\xi }\,\mathrm{d}\xi \right] ,\)

and we evaluate each of the two integrals using standard Gauss–Laguerre quadrature with \(N_{\mathrm {Q}}\) points.

Case 4: \(k\alpha T/(2\pi )>c_\mathrm{osc}\) and \(|t_0|k\alpha /(2\pi c_\mathrm{osc})<c_\mathrm{sing}\) (oscillatory, singular). We deform onto the same two contours as in Case 3, splitting the contour emanating from 0 (which now passes close to the singularity at \(t_0\)) into a finite part near the singularity and its remainder, evaluating the first integral using generalised Gauss–Legendre quadrature with \(2N_{\mathrm {Q}}\) points, the second using generalised Gauss–Laguerre quadrature (see Footnote 1) with \(2N_{\mathrm {Q}}\) points, and the third using standard Gauss–Laguerre quadrature with \(N_{\mathrm {Q}}\) points. (Again, when \(t_0=0\) the first integral is not present.) Here our assumption that \(c_\mathrm{sing}\le 1\) ensures that our treatment of the third integral \(\int _{T}^{T+\mathrm{i}\infty }\) as non-singular (being evaluated by standard, rather than generalised, Gauss–Laguerre quadrature) is consistent with our classification of the two integrals in Case 3 as non-singular, in the sense that if \(k\alpha T/(2\pi ) > c_\mathrm{osc}\) then

\(\frac{(T-t_0)k\alpha }{2\pi c_\mathrm{osc}}\ge \frac{Tk\alpha }{2\pi c_\mathrm{osc}}>1\ge c_\mathrm{sing}.\)

The total number of quadrature points (a proxy for computational cost) is \(N_{\mathrm {Q}}\) in Case 1, \(4N_{\mathrm {Q}}\) in Case 2, \(2N_{\mathrm {Q}}\) in Case 3, and \(5N_{\mathrm {Q}}\) in Case 4 (excluding special cases where \(t_0=0\)). Therefore, to minimise computational cost, we should take \(c_{\mathrm {osc}}\) as large as possible, and \(c_{\mathrm {sing}}\) as small as possible, so as to be in the cheapest Case 1 (non-oscillatory and non-singular) as often as possible. But obviously one cannot take \(c_{\mathrm {osc}}\) too large, or \(c_{\mathrm {sing}}\) too small, else oscillations (respectively, singularities) will not be adequately resolved. We investigate the choice of these parameters in more detail in the next section.

To validate the quadrature scheme described in Sect.6.2 and tune the parameters \(c_{\mathrm {osc}}\), \(c_{\mathrm {sing}}\) and \(N_{\mathrm {Q}}\) we carried out a detailed set of experiments for the model integral

\(I=\int _0^1 H^{(1)}_0\left( k(t-t_0)\right) \mathrm{d}t,\)  (28)

which is of the form (23) with \(\alpha =T=1\) and \(F(t;k)=H_0^{(1)}(k(t-t_0))\mathrm{e}^{-\mathrm {i}k t}\). For all possible combinations of \(c_{\mathrm {osc}}\in \{1,\ldots ,5\}\), \(c_{\mathrm {sing}}\in \{0,0.1,0.2,\ldots ,1\}\) and \(N_{\mathrm {Q}}\in \{5,10,15,20\}\), we computed the integral (28) using our algorithm at a large number of values of the parameters k and \(t_0\) in the range \(k\in (0,100]\), \(t_0\in [-1,0]\). For each set of parameters we measured the relative error compared to a reference value for (28), computed using a composite Gauss rule with a large number of quadrature points and mesh grading to handle (near) singularities. Based on numerical experiments we believe this reference solution to be accurate to at least 14 digits.
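As an illustration of the steepest descent machinery, the following sketch (in Python; a simplified stand-in for the scheme above, covering only the oscillatory non-singular Case 3 with plain rather than generalised Gauss–Laguerre quadrature) evaluates the model integral (28) and checks it against an adaptive reference:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss
from scipy.special import hankel1
from scipy.integrate import quad

def nsd_case3(F, k, alpha, T, n_quad=20):
    """Evaluate int_0^T F(t) exp(i*k*alpha*t) dt by numerical steepest descent,
    deforming onto the contours [0, i*inf) and [T, T + i*inf) (Case 3 only)."""
    xi, w = laggauss(n_quad)                 # rule for int_0^inf g(x) e^{-x} dx
    scale = 1j / (k * alpha)                 # steepest-descent parametrisation
    I0 = scale * np.sum(w * F(scale * xi))   # contour leaving t = 0
    IT = scale * np.exp(1j * k * alpha * T) * np.sum(w * F(T + scale * xi))
    return I0 - IT                           # orientation as in the identity above

# Model integral (28): alpha = T = 1, F(t) = H_0^(1)(k(t - t0)) e^{-ikt};
# k and t0 chosen so that the integral falls into Case 3 (non-singular).
k, t0 = 50.0, -0.5
F = lambda t: hankel1(0, k * (t - t0)) * np.exp(-1j * k * t)

I_nsd = nsd_case3(F, k, alpha=1.0, T=1.0)
ref_re, _ = quad(lambda t: hankel1(0, k * (t - t0)).real, 0.0, 1.0, limit=400)
ref_im, _ = quad(lambda t: hankel1(0, k * (t - t0)).imag, 0.0, 1.0, limit=400)
print(abs(I_nsd - (ref_re + 1j * ref_im)))   # agreement to many digits
```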

The parameter values

\(c_{\mathrm {osc}}=2,\qquad c_{\mathrm {sing}}=0.5,\qquad N_{\mathrm {Q}}=20\)  (29)

were found to produce a scheme which agrees with the reference solution to at least 12 digits, uniformly over the range of k and \(t_0\) tested, and these are the values we use in all our numerical results in Sect.7. As evidence of this claim we show in Fig.3a a plot of the corresponding relative error over the studied range of k and \(t_0\). The \((k,t_0)\) plane is divided into the four regions in which Cases 1, 2, 3 and 4 apply. The maximum relative error over the whole plot is \(1.6\times 10^{-13}\).

For comparison we show in Fig.3b the corresponding plot for the choices \(c_{\mathrm {osc}}=4\), \(c_{\mathrm {sing}}=0.1\) and \(N_{\mathrm {Q}}=15\). The maximum relative error over the whole plot is now \(7.5\times 10^{-8}\), and one can clearly see the accuracy degrading as one moves to the right in region 1 (increasing oscillation) and downwards in regions 1 and 3 (approaching singularity).

Plots of \(\log _{10}\) of the relative error in computing (28) using the quadrature scheme of 6.2 for two different sets of quadrature parameters \(\{c_{\mathrm {osc}}=2, c_{\mathrm {sing}}=0.5, N_{\mathrm {Q}}=20\}\) and \(\{c_{\mathrm {osc}}=4, c_{\mathrm {sing}}=0.1, N_{\mathrm {Q}}=15\}\). In both plots the labels 1-4 indicate the regions of the \((k,t_0)\) plane in which Cases 1, 2, 3 and 4 apply

As we shall show in Sect.7, the quadrature scheme in Sect.6.2 with the parameter choices (29) is sufficiently efficient to achieve our goal of frequency independent computational cost for the HNA collocation BEM. However, we do not claim that the quadrature scheme is completely optimal. In particular, we note that the error in numerical steepest descent (without singularities) behaves like \(O((k\alpha )^{-2N_{\mathrm {Q}}-1})\) as \(k\alpha \rightarrow \infty\) (see e.g. [16, p90]), so in Cases 3 and 4 savings could be made by reducing \(N_{\mathrm {Q}}\) as \(k\alpha\) grows. But we reserve such further optimisation for future work.

In this section we demonstrate the computational efficiency of our HNA collocation BEM via a series of numerical examples. We also include the results of experiments exploring the influence of the parameters \(C_\mathrm{OS}\) and \(\epsilon\) in the truncated SVD solver. Finally we present an application of our method to the computation of high frequency scattering by high order prefractals of the middle-third Cantor set. The code used to produce the results in this section forms part of a larger Matlab-based HNA BEM software repository, which is downloadable from github.com/AndrewGibbs/HNABEMLAB.

Throughout this section we denote by \(v_p:=\phi _p+\varPsi\) our HNA BEM approximation to the solution v of the original BIE (7), where \(\phi _p\) denotes our approximation of the solution \(\phi\) of the BIE (13), the subscript p indicating the maximum polynomial degree used in our HNA approximation space \(V_N\), and \(\varPsi\) denotes the geometrical optics contribution. We denote by \(v^\pm _{j,p}\) the corresponding numerical approximations to the amplitudes \(v^\pm _j\) of the oscillatory functions \(\mathrm{e}^{\pm \mathrm {i}k s}\) in the decomposition (8). From the boundary solution we obtain an approximation \(u_p:=u^i-\mathcal {S}v_p\) to the solution u of the BVP (2)–(3), via the representation formula (6).

To illustrate the high frequency performance of our HNA method we consider first the case where the screen consists of a single unit interval \(\varGamma =(0,1)\times \{0\}\). Examples with multiple screens will be considered in Sect.7.3. As the incident wave direction we take \({\mathbf{d}}=(1,-1)/\sqrt{2}\). Figure4a shows the resulting total field for wavenumber \(k=128\), and Fig.4b, c plot the boundary solution \(v_8\), along with the magnitudes \(|v^\pm _{1,8}|\) of the non-oscillatory amplitudes of the oscillatory factors \(\mathrm{e}^{\pm \mathrm {i}k s}\), for both \(k=128\) and \(k=512\).

HNA collocation BEM solution for \(\varGamma =(0,1)\times \{0\}\) and incident direction \((1,-1)/\sqrt{2}\), with \(p=8\). a Real part of the total field \(u^i + u^s_8\) for \(k=128\). b, c Real part of the boundary solution \(v_8\), along with the amplitudes \(|v^\pm _{1,8}|\), for \(k=128\) and 512 respectively

In Fig.5a we plot the relative \(L^1(\varGamma )\) error in the BEM solution for a range of values of p and k, using \(p=12\) as reference solution. For easier comparison with the best approximation theory (Theorem 3) it would be preferable to compute the \(\tilde{H}^{-1/2}(\varGamma )\) norm (which is possible via an appropriate single layer boundary integral representation, see e.g. [11, 7]), but here we choose the \(L^1(\varGamma )\) norm because it is easier to evaluate, and also because convergence of the boundary solution in \(L^1(\varGamma )\) implies, via a straightforward integral estimation, the convergence in supremum norm of the domain solution (on compact subsets of D) and the far-field pattern (cf. (35) below), which we study in Fig.5c, d. We compute our \(L^1(\varGamma )\) norms by composite Gaussian quadrature with 20 Gauss points per element on a graded mesh, built by intersecting the overlapping meshes used to approximate \(v_{12}\), with additional subdivision of the larger elements to ensure that no element is longer than one wavelength \(2\pi /k\).

Figure5a demonstrates that our approximation of the boundary solution converges exponentially (in \(L^1(\varGamma )\)) as we increase the polynomial degree p. Moreover, for fixed p the \(L^1\) error is not only bounded but actually decreases with increasing k.

Convergence with respect to increasing maximum polynomial degree p, for various wavenumbers k. a \(L^1\) error on \(\varGamma\) against p, b \(L^1\) error on \(\varGamma\) against CPU time, c, d \(L^\infty\) error in the near and far field against p

In Fig.5b we present the same results, plotted against the fourth root of the total CPU time for the BEM assembly and solve. The rationale behind this comparison is as follows. Our approximation space \(V_N\) for this problem has dimension \(N=O(p^2)\) (maximal polynomial degree p over two overlapping meshes of \(2(p+1)\) elements each). And with a fixed oversampling parameter \(C_{\mathrm {OS}}=1.25\), the number of collocation points \(M\approx C_{\mathrm {OS}}N\) also scales like \(O(p^2)\). Hence the number of elements in the BEM matrix scales like \(MN=O(p^4)\) with increasing p. The linear systems we use are moderate in size (typically \(N\le 1000\)) and are rapidly solved using our truncated SVD approach. As a result, the majority of the CPU time is spent assembling the discrete system, which means that total CPU time scales like \(O(p^4)\). Therefore, to scrutinize the performance of our numerical quadrature scheme for evaluating the BEM matrix entries by a direct comparison with Fig.5a, we should plot the boundary error against the fourth root of the CPU time. The results in Fig.5b demonstrate that our quadrature scheme (based on numerical steepest descent) is both accurate and efficient, and results in a fully discrete BEM for which the error at a fixed computational cost is bounded (in fact, decreases) with increasing k. The timings in Fig.5b are for a desktop PC with an Intel i7-4790 3.60GHz 4-core CPU, with matrix assembly done in parallel using a Matlab parfor loop. They show that highly accurate solutions can be obtained for essentially arbitrarily high frequencies in just a few seconds on a standard machine.

In Fig.6 we show analogous results for the case of grazing incidence, with \({\mathbf{d}}=(0,-1)\). A plot of the corresponding total field is shown in Fig.6a and plots of the near- and far-field errors are shown in Fig.6b, c. As before we see exponential convergence with increasing p, and no significant increase in error for increasing k. Relative errors are an order of magnitude larger than in the non-grazing example, because in this case there is no geometrical optics component (\(\varPsi =0\)), so the scattered fields are purely diffractive and hence smaller in maximum magnitude.

Comparing Figs.5d and 6c, we observe that in the far field the accuracy noticeably improves as k increases in the case of non-grazing incidence, but not in the case of grazing incidence. This is due to the fact that the far-field pattern \(u^\infty\) has peaks of magnitude proportional to \(|d_2|k\) at the values of \(\theta\) corresponding to the reflected and shadow directions (cf. Fig.8a), associated with the contribution of the geometrical optics term \(\varPsi\). Hence the denominator in the relative \(L^\infty\) error will be O(k) for non-grazing incidence, while the numerator is not expected to be comparably large because the contribution from \(\varPsi\) is computed exactly (up to quadrature error), and hence does not contribute to the discretization error.

Our choices of oversampling parameter \(C_{\mathrm {OS}}=1.25\) and SVD truncation parameter \(\epsilon =10^{-8}\) in the previous section were based on the results of extensive numerical testing. In this section we report a small subset of these results, to illustrate the effect of changing these parameters. All results in this section relate to the case \(\varGamma =(0,1)\times \{0\}\) with \({\mathbf{d}}=(1,-1)/\sqrt{2}\), as in Figs.4 and 5.

Dependence of solution accuracy on the oversampling parameter \(C_\mathrm {OS}\) and the truncation parameter \(\epsilon\) for moderate (\(k=256\), left-hand plots) and large (\(k=65536\), right-hand plots) wavenumbers and varying polynomial degree p. The values of \(C_\mathrm {OS}\) used are \(\{1,1.05,1.1,\ldots ,2\}\), but the resulting ratio M/N between the number of collocation points and the number of unknowns is in general slightly larger than \(C_\mathrm {OS}\) (recall (19)). The data points corresponding to \(C_{\mathrm {OS}}=1.25\) are shown with large markers

In Fig.7 we present plots showing the dependence of the \(L^1(\varGamma )\) solution error, for \(p\in \{2,\ldots ,8\}\), on the oversampling ratio M/N, which is a function of the oversampling parameter \(C_{\mathrm {OS}}\). The left-hand plots correspond to wavenumber \(k=256\) and the right-hand plots to \(k=65536\), while the top, middle and bottom plots correspond to the SVD truncation thresholds \(\epsilon =10^{-4}\), \(\epsilon =10^{-8}\) and \(\epsilon =10^{-12}\), respectively. In each case tests were run for \(C_{\mathrm {OS}}\in \{1,1.05,1.1,\ldots ,2\}\); note that for the non-integer values of \(C_{\mathrm {OS}}\) the resulting values of M/N are in general larger than \(C_{\mathrm {OS}}\) (recall (19)). As the reference solution we use the solution with \(p=12\) and \(C_{\mathrm {OS}}=2\).

In interpreting these plots we focus first on the results for \(C_{\mathrm {OS}}=1\) (\(M/N=1\)), which corresponds to a square system (no oversampling). The need for oversampling is clear from the fact that in all six plots the errors for \(C_{\mathrm {OS}}=M/N=1\) are not monotonically decreasing with increasing p. That is, increasing the dimension of the approximation space \(V_N\) does not necessarily lead to a more accurate solution. (Further results for this square case can be found in [48].)

By contrast, at least in plots (c)–(f) of Fig.7, for all fixed values of \(C_{\mathrm {OS}}>1\) considered, the error is monotonically decreasing with increasing p. Since computational cost is proportional to M, it is desirable to take \(C_{\mathrm {OS}}\) as small as possible while maintaining this monotonicity. But for \(C_{\mathrm {OS}}\) close to 1 there are still visible instabilities. On the basis of our experiments it seems that \(C_{\mathrm {OS}}\ge 1.25\) is sufficient to ensure stability for all the problems we considered. (The data points corresponding to the choice \(C_{\mathrm {OS}}=1.25\) are shown with larger markers to make this transition more clear.) Significantly, and perhaps surprisingly, the amount of oversampling required to stabilise the method appears to be independent of the wavenumber k, as one sees, for example, by comparing Figs.7c (moderate k) and 7d (larger k), and recalling the convergence plots in Fig.5a, b, which reach wavenumber \(k=262,144\) with no discernible loss of stability.

To interpret the dependence of the results on the SVD truncation parameter \(\epsilon\), it is instructive to recall Lemma 1, which, while not directly providing information about the error in our boundary solution (as explained in Sect.5), still provides some intuition. On the one hand, since the argument of the infimum on the right-hand side of (21) is a linearly increasing function of \(\epsilon\), we expect that if \(\epsilon\) is taken to be too large, the discrete residual for the SVD solution may be large, leading to a loss in accuracy of the boundary solution. This effect can be clearly observed by comparing Fig.7c, d (\(\epsilon =10^{-8}\)), where we see clear exponential convergence as p increases, with Fig.7a, b (\(\epsilon =10^{-4}\)), where errors appear to stagnate around \(p=5\). On the other hand, looking again at (21), we expect that if \(\epsilon\) is taken to be too small, the SVD solver may select a solution with a large coefficient norm, in which case our numerical solution may suffer from spurious oscillations between the collocation points, or other numerical instabilities that might increase the \(L^1(\varGamma )\) error. This degradation in accuracy for small \(\epsilon\) can be observed by comparing Fig.7c, d (\(\epsilon =10^{-8}\)), with Fig.7e, f (\(\epsilon =10^{-12}\)). Based on extensive testing we found that the choice \(\epsilon =10^{-8}\) gave satisfactory performance for all the examples we considered.

In this section we present an application of our HNA BEM to high frequency scattering by the middle-third Cantor set. The mathematical analysis of acoustic scattering by fractal screens, of which the Cantor set is an example, was developed recently in [9] and [11], with the latter providing a rigorous convergence theory for an approximation strategy based on conventional Galerkin BEM using piecewise constant basis functions, along with numerical results for low frequency scattering in both two and three dimensions by a range of fractal screens. For two-dimensional scattering by the Cantor set, the HNA BEM presented in the current paper allows us to investigate the high frequency regime, and to our knowledge this is the first time that high frequency BEM simulations of scattering by fractal screens have been presented in the literature. Fractals, which one might define loosely as sets possessing geometric detail on every lengthscale, arise in numerous scattering applications, including the human lung and other dendritic structures in medical science [42], snowflakes, ice crystals and other atmospheric particles in climate science [51], fractal antennas for electromagnetic wave transmission/reception [24], fractal piezoelectric ultrasound transducers [45], and fractal aperture problems in laser physics [14]; for further references see [11]. The case of high frequency scattering by fractals is particularly intriguing, because even though the diameter of the scatterer may be large compared to the wavelength, the scatterer always possesses sub-wavelength structure, so standard ray-based high frequency approximations do not apply. In his 1979 paper on random phase screens [3], Berry describes this case as a new regime in wave physics in which the scattered waves adopt unfamiliar forms that should be studied in their own right.

In practical applications and numerical simulations one generally works with prefractal approximations in which the fractal structure is truncated at some level. The standard prefractals for the middle-third Cantor set are defined as follows. Letting \(\varGamma ^{(0)}=(0,1)\times \{0\}\), for \(j\in \mathbb {N}\) we construct the jth prefractal \(\varGamma ^{(j)}\) by removing the (closed) middle third from each disjoint interval making up the previous prefractal \(\varGamma ^{(j-1)}\), so that e.g. \(\varGamma ^{(1)}=((0,1/3)\cup (2/3,1))\times \{0\}\). That the BIE solutions for sound-soft scattering on the prefractals converge to a well-defined non-zero limiting BIE solution on the underlying fractal was proved in [9, Example8.2] (and see also [11, Prop.6.1]).
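The prefractal construction just described is easily reproduced; a minimal sketch (illustrative function name):

```python
def cantor_prefractal(j):
    """Component intervals of the j-th standard prefractal of the
    middle-third Cantor set, starting from Gamma^(0) = (0, 1)."""
    intervals = [(0.0, 1.0)]
    for _ in range(j):
        intervals = [piece
                     for a, b in intervals
                     for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return intervals

print(cantor_prefractal(1))       # [(0.0, 0.333...), (0.666..., 1.0)]
print(len(cantor_prefractal(5)))  # 32 components, each of length 3**-5
```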

A plot of the total field for scattering by \(\varGamma ^{(2)}\) with incident direction \({\mathbf{d}}=(1,-1)/\sqrt{2}\), computed using our collocation HNA BEM, was presented in Fig.1. In Fig.8 we present radial plots of \(\log _{10}|u^\infty (\theta )|\), where \(u^\infty\) denotes the far-field pattern of the scattered field,

for the first six prefractals \(\varGamma ^{(0)},\ldots ,\varGamma ^{(5)}\), with \(k=1024\) and \({\mathbf{d}}=(1,-1)/\sqrt{2}\). The log scales are truncated at \(-1\) to leave space in the centre of the plots for the corresponding prefractals to be plotted. Corresponding \(L^1(\varGamma )\) convergence plots are presented in Fig.9, where, for each \(j=0,\ldots ,5\) we take as reference solution the corresponding solution with \(p=10\). The fact that the absolute \(L^1(\varGamma )\) error in our boundary solution with \(p=8\) is smaller than \(10^{-1}\) for all \(j=0,\ldots ,5\) implies that our far-field plots in Fig.8 are also accurate to \(10^{-1}\) absolute error, since

\(\sup _{\theta \in [0,2\pi )}\left| u^\infty _p(\theta )-u^\infty (\theta )\right| \le \Vert v_p-v\Vert _{L^1(\varGamma )}.\)  (35)

We note that at \(k=1024\) the lowest order prefractal \(\varGamma ^{(0)}\) is approximately 160 wavelengths long, and each component of the highest order prefractal \(\varGamma ^{(5)}\) is approximately one wavelength long. While the far-field patterns for \(\varGamma ^{(4)}\) and \(\varGamma ^{(5)}\) are similar, they are clearly not identical, which is to be expected since in refining \(\varGamma ^{(4)}\) to \(\varGamma ^{(5)}\) we are making wavelength-scale changes to the scatterer geometry. Hence for \(\varGamma ^{(5)}\) we are still in the pre-asymptotic regime with regard to convergence to the solution for scattering by the limiting fractal screen. (The asymptotic regime is considered in [11] using a conventional BEM approach.) We emphasize that our HNA method can handle much larger wavenumbers than \(k=1024\), but at very large wavenumbers the far-field plots are so oscillatory they are difficult to interpret, so we do not present them here.

Footnote 1: In related literature it is common for generalised Gauss–Laguerre to refer to a quadrature rule for integrals of the form \(\int _0^\infty x^{-\nu }f(x)\mathrm{e}^{-x}\mathrm{d}{x}\), for \(\nu >-1\), where f is well-approximated by a polynomial, but we do not use this definition here. In this paper, we use generalised Gauss–Laguerre to mean a quadrature rule which can efficiently evaluate \(\int _0^\infty [\log (x)f(x)+g(x)]\mathrm{e}^{-x}\mathrm{d}{x}\), where f and g are well-approximated by polynomials.

Bruno, O., Geuzaine, C., Monro Jr., J., Reitich, F.: Prescribed error tolerances within fixed computational times for scattering problems of arbitrarily high frequency: the convex case. Philos. Trans. R. Soc. A 362(1816), 629–645 (2004)

Chandler-Wilde, S.N., Langdon, S., Mokgolele, M.: A high frequency boundary element method for scattering by convex polygons with impedance boundary conditions. Commun. Comput. Phys. 11(2), 575–593 (2012)

Christian, J.M., McDonald, G.S., Kotsampaseris, A., Huang, J.: Fresnel diffraction patterns from fractal apertures: boundary conditions and circulation, pentaflakes and islands. In: Proceedings of EOSAM 2016, Berlin, Germany. European Optical Society (2016)

Giladi, E., Keller, J.B.: An asymptotically derived boundary element method for the Helmholtz equation. In: Proceedings of the 20th Annual Review of Progress in Applied Computational Electromagnetics, pp. 52–74 (2004)

Huybrechs, D., Olver, S.: Highly oscillatory quadrature. In: Engquist, B., Fokas, A., Hairer, E., Iserles, A. (eds.) Highly Oscillatory Problems. London Mathematical Society Lecture Note Series, pp. 25–50. CUP, Cambridge (2009)

Priddin, M.J., Kisil, A.V., Ayton, L.J.: Applying an iterative method numerically to solve \(n\times n\) matrix Wiener–Hopf equations with exponential factors. Phil. Trans. R. Soc. A 378(2162), 20190241 (2019)


Gibbs, A., Hewett, D.P., Huybrechs, D. et al. Fast hybrid numerical-asymptotic boundary element methods for high frequency screen and aperture problems based on least-squares collocation. SN Partial Differ. Equ. Appl. 1, 21 (2020). https://doi.org/10.1007/s42985-020-00013-3

Research on High-Frequency Combination Coding-Based SSVEP-BCIs and Its Signal Processing Algorithms

Feng Zhang, Chengcheng Han, Lili Li, Xin Zhang, Jun Xie, Yeping Li, "Research on High-Frequency Combination Coding-Based SSVEP-BCIs and Its Signal Processing Algorithms", Shock and Vibration, vol. 2015, Article ID 194230, 12 pages, 2015. https://doi.org/10.1155/2015/194230

This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain computer interface (BCI) systems. The new paradigm is High-Frequency Combination Coding-Based SSVEP (HFCC-SSVEP). The goal of this study is to increase the number of targets using fewer stimulation frequencies, while diminishing subjects' fatigue and reducing the risk of photosensitive epileptic seizures. This paper investigated the HFCC-SSVEP high-frequency response (beyond 25Hz) for 3 frequencies (25Hz, 33.33Hz, and 40Hz). HFCC-SSVEP produces stimuli with high stimulation frequencies through a Time Series Combination Code. Furthermore, the Improved Hilbert-Huang Transform (IHHT) is adopted to extract time-frequency features of the proposed SSVEP response. Lastly, the differentiation combination (DC) method is proposed to select the combination coding sequence in order to increase the recognition rate; as a result, the IHHT algorithm and the DC method for the proposed SSVEP paradigm increase recognition efficiency, improving the ITR and the stability of the BCI system. Furthermore, SSVEPs evoked by high-frequency stimuli (beyond 25Hz) minimally diminish subjects' fatigue and prevent safety hazards linked to photo-induced epileptic seizures. This study tests five subjects in order to verify the feasibility of the proposed method.

A brain computer interface (BCI) is a direct communication pathway between a human or animal brain and an external device. Nowadays, noninvasive scalp electroencephalogram (EEG) measurements have become a popular solution in BCI research. The most commonly used signals in EEG-based BCI systems are event-related synchronization of mu and beta bands, event-related potentials, and steady-state visual evoked potential (SSVEP) [1–3].

The SSVEP is usually elicited by flickering stimulation frequency higher than 6Hz, while the frequency for inducing FVEP (Flash Visual Evoked Potential) should be lower than 2Hz [4]. The most common measure to assess the performance of a BCI system is Shannon's information transfer rate (ITR) [5]. Steady-state visual evoked potential (SSVEP) has been regarded as an efficient approach to design BCIs with high information transfer rate (ITR). SSVEP-based systems lead to transfer rates of 100 bits/min and beyond [2, 6], compared to about 10–25 bits/min for other BCI systems [5]. Furthermore, SSVEP-based BCI systems have the advantages of short responding time, minimum training, and high ITR [4, 7, 8].
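For reference, the standard Shannon/Wolpaw ITR computation reads as follows (a sketch; the function name and the example numbers are ours, not results from this study):

```python
import numpy as np

def wolpaw_itr(n_targets, accuracy, trial_seconds):
    """Shannon/Wolpaw information transfer rate in bits/min, the standard
    BCI performance measure referred to above."""
    N, P = n_targets, accuracy
    bits = np.log2(N)
    if 0.0 < P < 1.0:
        bits += P * np.log2(P) + (1 - P) * np.log2((1 - P) / (N - 1))
    return bits * 60.0 / trial_seconds

# e.g. 27 coded targets, 90% accuracy, 2 s selections -> roughly 115 bits/min
print(wolpaw_itr(27, 0.90, 2.0))
```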

The visual stimulator plays an important role in an SSVEP BCI. Several visual stimulators have been used for evoking SSVEP, such as a cathode ray tube (CRT) monitor [9], liquid crystal display (LCD) monitor, and light-emitting diode (LED) array [10]. Considering stimulation parameters such as size, color, and position, LED arrays are not flexible and increase the overall system cost; presenting flickers on a computer monitor is more flexible than using stand-alone lights/LEDs [9].

Current SSVEP-based BCI systems utilize a single frequency to encode each target. Hence, a large number of targets requires a large number of frequencies. Increasing the number of targets then decreases the frequency resolution, which in turn makes classification more difficult. This is especially problematic on computer screens. Recently, some researchers began to study how to increase the number of targets with fewer frequencies, such as stimuli flickered at the same frequency and differed only in relative phase [11], right-and-left field stimulation with two frequencies [12], and using dual frequency stimulation [4]. However, SSVEPs are usually evoked by low-mid frequencies in the 7–30Hz band, and the low-mid frequency band has some disadvantages [8]. First, the low frequency band covers the alpha band (8–13Hz), which can cause a considerable amount of false positives. Second, subjective evaluations showed that frequencies between 5 and 25Hz are more annoying than higher ones; visual fatigue would easily occur. Third, flash and pattern reversal stimuli can provoke epileptic seizures, especially in the 15–25Hz range [13]. All of these disadvantages can be avoided by using the high frequency band [14–16].

This study proposes High-Frequency Combination Coding-Based flickers for evoking SSVEP (HFCC-SSVEP). The HFCC-SSVEP is essentially based on frequency modulation. Each flicker comprises different high flickering frequencies through a High-Frequency Combination Code. The High-Frequency Combination Coding-Based SSVEP (beyond 25Hz) is induced by CRT (Cathode Ray Tube) and LED. This system uses only one Oz-A1 EEG channel for SSVEP recording. The proposed new paradigm produces stimuli with high stimulation frequencies through a Time Series Combination Code. Furthermore, conventional spectral methods are not always suitable for detecting high-frequency SSVEPs. In this paper, an improved Hilbert-Huang Transform (HHT) based high-frequency-modulated SSVEP feature extraction method is proposed to extract time-frequency features of the proposed SSVEP response. Furthermore, SSVEPs evoked by high-frequency stimuli (beyond 25Hz) minimally diminish subjects' fatigue and prevent safety hazards linked to photo-induced epileptic seizures.

Three healthy male volunteers, randomly selected from the students of the research institution at Xi'an Jiaotong University and aged from 21 to 25 years, participated in this study. They were seated in a comfortable armchair in a dimly illuminated, quiet EEG testing lab free of distractions, 50 cm away from the stimulation unit (CRT display or LED). EEG signals were measured from three electrodes (g.USBamp, g.tec Guger Technologies, Austria) placed at Oz, A1 (unilateral earlobe), and Fpz (ground), in compliance with the international 10–20 EEG system. The unilateral (left or right) earlobe was used as the recording reference, and all electrode impedances were kept below 5 kOhm. The sampling frequency was 1200 Hz.

Figure 1 shows the block diagram of the HFCC-SSVEP BCI. HFCC-SSVEPs were induced by 27 stimulus series, displayed as cyclic flickers at the center of the CRT. The 27 stimulus series were generated from 3 fundamental frequency elements through the Time Series Combination Code. The fundamental frequency elements were only three high frequencies, namely 25, 33.33, and 40 Hz, labeled 1, 2, and 3; their selection depended on the screen refresh rate and the sampling rate. Each fundamental frequency element consisted of 20 pulses. The data lengths of the 27 stimulus series varied between 1.5 s and 2.4 s. The subjects gazed at each stimulus sequence in turn.
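The 27 codes and the 1.5–2.4 s range of sequence lengths follow directly from the stated parameters: three code positions, each drawn from the three labeled frequencies, with 20 pulses (hence 20/f seconds) per element. A minimal Python sketch, assuming three-element codes (consistent with 3^3 = 27); all identifiers are illustrative:

```python
from itertools import product

FUND = {1: 25.0, 2: 100.0 / 3.0, 3: 40.0}   # Hz; labels 1/2/3 (33.33 Hz = 100/3)
PULSES_PER_ELEMENT = 20

# All 27 three-element combination codes: 111, 112, ..., 333.
codes = [''.join(map(str, c)) for c in product((1, 2, 3), repeat=3)]

def duration_s(code: str) -> float:
    """Total flicker duration: 20 pulses per element at that element's frequency."""
    return sum(PULSES_PER_ELEMENT / FUND[int(d)] for d in code)

print(len(codes))                       # 27
print(min(map(duration_s, codes)))      # 1.5 s  (code 333)
print(max(map(duration_s, codes)))      # 2.4 s  (code 111)
```

The computed extremes reproduce the 1.5–2.4 s range reported for the stimulus series.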

Figure 2 shows the EEG signals of SSVEP responses presented through the LED at 25 Hz, 30 Hz, 40 Hz, and 75 Hz; the length of each EEG record is 8 s. The frequency spectra of the EEG signals were determined using the fast Fourier transform (FFT) in MATLAB. Figure 3 shows the FFT spectra of the SSVEP EEG signals for the LED, and Figure 4 shows the fitted curve of the FFT spectral energy. Spectral peak detection is usually appropriate for frequency analysis of SSVEPs in BCI applications, and Figure 3 shows that the SSVEP responses at 25, 30, and 40 Hz are clearly detectable; the 75 Hz spectral line, however, cannot be distinguished in the FFT spectrum.
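The authors worked in MATLAB; as a rough NumPy equivalent (not the authors' code), the one-sided FFT amplitude spectrum used to look for such SSVEP peaks can be computed as below, here on a synthetic stand-in signal rather than real EEG:

```python
import numpy as np

FS = 1200.0                          # sampling rate used in the study (Hz)

def amplitude_spectrum(eeg: np.ndarray, fs: float = FS):
    """One-sided FFT amplitude spectrum of a single EEG channel."""
    n = len(eeg)
    window = np.hanning(n)                          # taper to reduce leakage
    spec = np.abs(np.fft.rfft(eeg * window)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# Synthetic stand-in for an 8 s recording with a 25 Hz SSVEP component.
t = np.arange(0, 8.0, 1.0 / FS)
eeg = np.sin(2 * np.pi * 25.0 * t) + 0.5 * np.random.randn(t.size)
freqs, spec = amplitude_spectrum(eeg)
print(freqs[np.argmax(spec[1:]) + 1])               # peak near 25 Hz
```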

In this study, each Oz-A1 EEG signal was segmented into ten segments. Figure 10 shows the SCCBH-SSVEP EEG signals for 6 of the 27 selections. The length of the EEG data varies between 1.5 s and 2.4 s. The frequency spectra of the averaged EEG signals were determined using the FFT in MATLAB. Spectral peak detection is usually appropriate for frequency analysis of SSVEPs in BCI applications, but the FFT spectrum contains no temporal information, so the 27 selections built from the three high-frequency elements cannot be distinguished from each other in the FFT spectrum.

Spectrum analysis is usually used to extract the frequency information from traditionally evoked SSVEP responses. The underlying idea is always the same: a blinking or moving visual stimulus at a constant frequency (the stimulus frequency) elicits a response in the brain at the same frequency and its harmonics. Note that this behavior reflects a nonlinearity of the visual system [5].

The evoked SSVEP response of a High-Frequency Combination Coding-Based stimulus contains not only frequency information but also temporal information. The frequency information consists of three frequencies arranged by the Time Series Combination Code, and the length of the evoked response varies between 1.5 s and 2.4 s (see Figure 2); as a result, the evoked SSVEP responses are nonstationary and nonlinear. Traditional spectrum detection methods are therefore not suited to analyzing the time series combination coding-based high-frequency evoked SSVEP response, and more sophisticated nonstationary and nonlinear signal processing techniques have recently been used to analyze such responses [17–19].

The Hilbert-Huang transform (HHT) [20], consisting of empirical mode decomposition (EMD) and Hilbert spectral analysis, is a recently developed adaptive data analysis method designed specifically for nonlinear and nonstationary data. The key part of the HHT is EMD, with which any complicated data set can be decomposed into a finite, and often small, number of intrinsic mode functions (IMFs). The instantaneous frequency defined via the Hilbert transform carries the physical meaning of local phase change better for IMFs than for any other non-IMF time series. Because the decomposition is based on the local characteristics of the data, it has proved quite versatile in a broad range of applications for extracting signals from data generated in noisy, nonlinear, and nonstationary processes [21], and it has been widely used to analyze EEG signals [22–26]. In order to extract the time-frequency characteristics of high-frequency time series combination coding-based SSVEPs, we propose an IHHT-based high-frequency time-modulated SSVEP method.
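For orientation, the two HHT stages (EMD, then Hilbert-based instantaneous frequency) can be sketched with the third-party PyEMD package (distributed on PyPI as EMD-signal; assumed available) and SciPy. This is a plain-HHT stand-in, not the improved HHT developed in this paper:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD                 # third-party 'EMD-signal' package (assumed)

def hht(signal: np.ndarray, fs: float):
    """Plain HHT: EMD into IMFs, then Hilbert instantaneous frequency per IMF."""
    imfs = EMD()(signal)                              # each row is one IMF
    inst_freq = []
    for imf in imfs:
        phase = np.unwrap(np.angle(hilbert(imf)))     # analytic-signal phase
        inst_freq.append(np.diff(phase) * fs / (2.0 * np.pi))
    return imfs, inst_freq

fs = 1200.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 25.0 * t) + 0.3 * np.sin(2 * np.pi * 5.0 * t)
imfs, inst_freq = hht(x, fs)
print(len(imfs), float(np.median(inst_freq[0])))      # first IMF is near 25 Hz
```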

Figure 9 shows the steps of the HHT-based high-frequency time-modulated SSVEP feature extraction, together with a local spectrum extreme target identification algorithm and a differentiation combination method. The extraction method consists of synchronous averaging, band-pass filtering, EMD, selection of IMFs, instantaneous frequency estimation, and the Hilbert spectrum. To ensure that the time-frequency characteristics of high-frequency time series combination coding-based SSVEPs are extracted efficiently, the key algorithms must be chosen and optimized. The end effect and the stopping criterion are the core problems of EMD, so two methods are selected and optimized to overcome these shortcomings when processing variable-frequency EEG data.

(i) Optimization Selection of the EMD Endpoint Prediction. Averaging the upper and lower envelopes is one of the cores of the EMD algorithm; the envelopes are obtained by fitting cubic spline curves through the extreme points. The EMD end effect concerns how to find a proper fitting curve between the last extreme point and the endpoint. The boundary prediction method and the traditional method are compared in order to mitigate the end effect in high-frequency time-modulated SSVEP feature extraction.

Figure 5 shows the boundary prediction method, which uses the first-order approximate point as the endpoint. Theoretical and real-data analysis [27] shows that this choice not only satisfies the conditions of cubic spline fitting but also keeps the fluctuation of the fitted curve minimal. In Figure 6, the E-Natural curve is the upper envelope fitted with a cubic spline using the boundary prediction method, and the Natural curve is the upper envelope fitted with a cubic spline using the traditional method. Figure 6 shows that the fitted curve based on boundary prediction fluctuates least near the ends, so boundary prediction is used to overcome the end effect of EMD when processing variable-frequency EEG data; a simplified sketch follows.
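The paper's first-order endpoint prediction is not reproduced here; the sketch below only illustrates the general idea of extending the extrema sequence past the record boundaries before spline fitting, using mirroring as a common stand-in for the boundary prediction step:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def upper_envelope(x: np.ndarray, extend: bool = True) -> np.ndarray:
    """Cubic-spline upper envelope of a signal. With extend=True the outermost
    maxima are mirrored past both record boundaries before fitting, a common
    stand-in for endpoint prediction that tames spurious end swings."""
    idx = argrelextrema(x, np.greater)[0]
    t, v = idx.astype(float), x[idx]
    if extend and len(idx) >= 2:
        t = np.r_[-t[1], t, 2.0 * (len(x) - 1) - t[-2]]   # mirrored knots
        v = np.r_[v[1], v, v[-2]]
    return CubicSpline(t, v, bc_type='natural')(np.arange(len(x)))
```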

(ii) The Parameter Optimization of the EMD Stopping Criterion. The EMD algorithm is essentially a sifting process, whose formula is

$h_{i,k}(t) = h_{i,k-1}(t) - m_{i,k-1}(t), \qquad x(t) = \sum_{i=1}^{n} c_i(t) + r_n(t),$

where $x(t)$ is the target signal, $c_i(t)$ is the $i$th IMF extracted by the EMD algorithm (taken as the final $h_{i,k}(t)$), $n$ is the number of IMFs, $m_{i,k}(t)$ is the average of the upper and lower envelope curves during sifting, $r_n(t)$ is the final residue, and $k$ is the number of sifting iterations.

The EMD algorithm essentially acts as a dyadic half-band filter bank, analogous to a dyadic wavelet decomposition. In order to ensure that the resulting IMFs have sufficient physical meaning in amplitude and frequency, which essentially means ensuring that the scale ratio of two adjacent IMFs stays close to 2, the number of sifting iterations $k$ must be limited during the sifting process.

Fixed sifting (iterating 10 times) is used to overcome the shortage of the EMD stopping criterion in the processing of variable-frequency EEG data, that is, $k = 10$, which ensures that the scale ratio of two adjacent IMFs stays close to 2. The advantages of this selection are as follows [28]: (a) it ensures that the EMD algorithm acts essentially as a dyadic half-band filter bank; (b) it ensures that the result has physical meaning; (c) it bounds the number of decomposition results (for data of length $N$, roughly $\log_2 N$ IMFs at most). A sketch of this fixed-iteration sifting loop is given below.
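A minimal sketch of EMD with the fixed 10-iteration stopping rule, assuming natural cubic-spline envelopes and omitting the end-effect handling discussed above; all function names are illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def _envelope(x: np.ndarray, comparator) -> np.ndarray:
    idx = argrelextrema(x, comparator)[0]
    return CubicSpline(idx, x[idx], bc_type='natural')(np.arange(len(x)))

def sift_imf(x: np.ndarray, n_sift: int = 10) -> np.ndarray:
    """One IMF via a fixed number of sifting iterations (k = 10 in the text)."""
    h = x.copy()
    for _ in range(n_sift):
        mean_env = 0.5 * (_envelope(h, np.greater) + _envelope(h, np.less))
        h = h - mean_env                       # h_k = h_{k-1} - m_{k-1}
    return h

def emd_fixed(x: np.ndarray, max_imfs: int = 8):
    """Peel off IMFs until the residue is too smooth to sift further."""
    imfs, residue = [], x.copy()
    while len(imfs) < max_imfs and len(argrelextrema(residue, np.greater)[0]) > 2:
        imfs.append(sift_imf(residue))
        residue = residue - imfs[-1]
    return imfs, residue
```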

(iii) The Optimization of the Calculation Method for Instantaneous Frequency. In order for the instantaneous frequency based on the Hilbert transform (HT) to have physical meaning, the following conditions must hold: (a) the signal must be an IMF; (b) the signal amplitude must not change too strongly (the Bedrosian theorem limitation); (c) the phase of the signal must not be too complicated (the Nuttall theorem limitation).

The generalized zero-crossing (GZC) algorithm is based on an averaging concept, so it offers high stability and high computational precision, and it is well suited to signals whose frequency fluctuation is small. The variable-frequency characteristic of the high-frequency time series combination coding-based SSVEPs in this paper is piecewise stationary; as a result, the GZC algorithm is used to compute the instantaneous frequency of the proposed SSVEP response signals. A simplified sketch follows.
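The following is a simplified GZC-style estimator, not the full multi-scale averaging of the original algorithm: zero crossings and local extrema are treated as quarter-period marks, and each gap between adjacent marks yields a local frequency.

```python
import numpy as np
from scipy.signal import argrelextrema

def gzc_instantaneous_frequency(x: np.ndarray, fs: float) -> np.ndarray:
    """Simplified generalized zero-crossing (GZC) instantaneous frequency:
    each gap between adjacent quarter-period marks gives f = fs / (4 * gap)."""
    zc = np.where(np.diff(np.signbit(x)))[0]                  # zero crossings
    ext = np.r_[argrelextrema(x, np.greater)[0], argrelextrema(x, np.less)[0]]
    marks = np.unique(np.r_[zc, ext])
    f = np.full(len(x), np.nan)
    for a, b in zip(marks[:-1], marks[1:]):
        f[a:b] = fs / (4.0 * (b - a))
    return f

fs = 1200.0
t = np.arange(0, 1.0, 1.0 / fs)
print(float(np.nanmedian(gzc_instantaneous_frequency(np.sin(2 * np.pi * 25.0 * t), fs))))
# about 25.0 for a pure 25 Hz tone
```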

(iv) Local Spectrum Extreme Target Identification Algorithm. The relevant IMF is evenly divided into 3 sections, and the spectral extreme (peak frequency) of each section is calculated. Each section is then encoded according to the following rules: (a) a peak nearest 25 Hz is coded 1; (b) a peak nearest 33.33 Hz is coded 2; (c) a peak nearest 40 Hz is coded 3.

Finally, the coding result is compared with the elements of the encoding library; if the coding result is in the library, it is marked as nonzero, and otherwise it is marked as zero. The number of nonzero results is the number of correct identifications, so the recognition rate is quantified by the local spectrum extreme target identification algorithm. The recognition accuracy is computed as $P = (N_{\mathrm{nonzero}}/N_{\mathrm{total}}) \times 100\%$, where $N_{\mathrm{nonzero}}$ is the number of coding results found in the library and $N_{\mathrm{total}}$ is the total number of tested sequences. A sketch of this identification step follows.
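A compact sketch of the section-wise peak coding and the library-based recognition rate, under the assumption (stated in the text) that the IMF is split into three equal sections; helper names are illustrative:

```python
import numpy as np

FUND = np.array([25.0, 100.0 / 3.0, 40.0])          # fundamentals, labels 1/2/3

def classify_sequence(imf: np.ndarray, fs: float) -> str:
    """Divide the relevant IMF into 3 equal sections, take each section's
    spectral peak, and code it as the label of the nearest fundamental."""
    code = ''
    for section in np.array_split(imf, 3):
        spec = np.abs(np.fft.rfft(section))
        freqs = np.fft.rfftfreq(len(section), d=1.0 / fs)
        peak = freqs[np.argmax(spec[1:]) + 1]        # skip the DC bin
        code += str(1 + int(np.argmin(np.abs(FUND - peak))))
    return code

def recognition_rate(codes: list, library: set) -> float:
    """P = (codes found in the encoding library) / (all codes) * 100%."""
    return 100.0 * sum(c in library for c in codes) / len(codes)
```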

(v) Differentiation Combination Method. In the experiment, the fundamental frequency elements were only three high frequencies, 25, 33.33, and 40 Hz, labeled 1, 2, and 3. The data lengths of the 27 stimulus series varied between 1.5 s and 2.4 s. The 27 stimulus series were averaged by superposition over 1 to 10 repetitions, and the recognition accuracy for each number of superposition averages was calculated with the local spectrum extreme target identification algorithm. Figure 8 shows the statistical relationship between the number of superposition averages and the recognition accuracy.

The statistics in Figure 8 show that the recognition accuracy is only 77.78% even with a 10-fold superposition average, which is clearly below expectation. Figure 8 also shows that the recognition accuracy differs across combinations of the three stimulation units: different combinations yield different recognition rates. Comparison and analysis show that the recognition accuracy is higher under the differentiation combination method, in which the three stimulation units all differ. The reason is that, compared with a single frequency, the differentiation combination method breaks the adaptation of the visual system [29], so different-frequency encoding improves the recognition rate. As a result, on the basis of the fundamental frequency elements 25, 33.33, and 40 Hz (labeled 1, 2, and 3), the differentiation combination encodings retained from the 27 combination encoding modes are 123, 132, 213, 231, 312, and 321; a short enumeration is given below. Figure 10 shows the SCCBH-SSVEP EEG single-response signals of the differentiation combination method, and Figure 12 shows the Hilbert spectra of the selected signals obtained with the HHT-based high-frequency time-modulated SSVEP method.
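For completeness, the six differentiation-combination codes are exactly the permutations of the three labels in which no frequency repeats; a one-liner makes this explicit:

```python
from itertools import permutations

# Keep only codes in which all three positions use different fundamental
# frequencies, countering visual-system adaptation.
diff_codes = [''.join(map(str, p)) for p in permutations((1, 2, 3))]
print(diff_codes)   # ['123', '132', '213', '231', '312', '321']
```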

Each Oz-A1 EEG signal is first segmented into ten segments for synchronous averaging, which decreases the noise of the EEG data. Next, data in the 25–45 Hz frequency range are chosen for EMD analysis, matching the range of the three fundamental frequency elements (25 Hz, 33.33 Hz, and 40 Hz). From the EMD results, the relevant IMF is selected empirically. The instantaneous frequency of the selected IMF is then calculated using the generalized zero-crossing algorithm [30], and finally the Hilbert spectrum is computed from the instantaneous frequency. A local spectrum extreme target identification algorithm is proposed to calculate the recognition rate; the experimental results show that the recognition rate is not high for all combination coding sequences. The differentiation combination method is therefore used to select combination coding sequences that increase the recognition rate; as a result, 6 sequences are selected: 123, 132, 213, 231, 312, and 321 (fundamental frequencies 25, 33.33, and 40 Hz, numbered 1, 2, and 3, respectively).

Figure 11 shows the frequency spectra of the selected EEG signals, determined using the FFT in MATLAB. In Figure 11, the lime lines represent 25 Hz, the red lines 33.33 Hz, and the spring-green lines 40 Hz. Spectral peak detection is usually appropriate for frequency analysis of SSVEPs in BCI applications, but the FFT spectrum contains no temporal information, so the 6 selections built from the three high-frequency elements cannot be distinguished from each other in the FFT spectrum. Figure 12 shows the Hilbert spectra of the selected signals obtained with the HHT-based high-frequency time-modulated SSVEP method. The Hilbert spectrum clearly presents the time-frequency characteristics of the flashing sequences based on the high-frequency Time Series Combination Code. Compared with the FFT spectrum, it provides not only the frequency information but also the frequency change over time, which captures the features of the Time Series Combination Code of the three high frequencies. According to these combination-code time-frequency characteristics, the Hilbert spectra (see Figure 12) clearly identify the selections.

Finally, six stimulus targets are presented with three high frequencies through HFCC-SSVEP, while, by contrast, three stimulus targets are presented with three low frequencies through traditional SSVEP. The two contrasting experiments in Figure 13 are applied to intelligent wheelchair navigation control in order to verify the technical advantage of the proposed method and to ensure that the HFCC-SSVEP-based navigation system is efficient and harmless. Figure 13(a) shows the experimental paradigm of HFCC-SSVEP with 6 targets presented on the screen: 123, 132, 213, 231, 312, and 321 (fundamental frequencies 25, 33.33, and 40 Hz, numbered 1, 2, and 3, respectively). Figure 13(b) shows the experimental paradigm of the traditional SSVEP, with 3 targets at 12, 12.5, and 15 Hz. Table 1 compares HFCC-SSVEP with traditional SSVEP, averaged over five subjects; its columns report the stimulation time, the recognition rate, the degree of fatigue (fatigue time, df), and the information transfer rate (ITR) of the BCI system. Table 1 shows that, compared with the traditional SSVEP (Figure 14), HFCC-SSVEP offers shorter stimulation time, higher accuracy, less fatigue, and higher ITR.

We also studied the high-frequency (beyond 25 Hz) SSVEP response with the stimulation paradigm presented on the LED. Figure 3 shows that the 75 Hz spectral line cannot be distinguished in the FFT spectrum: the SNR (signal-to-noise ratio) of responses beyond 40 Hz is very low and cannot be resolved by traditional analysis methods. A weak-signal feature extraction method for SSVEP responses beyond 40 Hz must therefore be developed; this will be addressed in future work. As a result, 25 Hz, 33.33 Hz, and 40 Hz in the high frequency band were selected as the fundamental frequencies for the HFCC-SSVEP paradigm.

The proposed HFCC-SSVEP paradigm for a BCI system not only increases the number of targets through the Time Series Combination Code of fewer stimulation frequencies but also shortens the recognition time. Furthermore, it diminishes users' fatigue and the risk of photosensitive epileptic seizures. This study specifies twenty-seven (3^3) selections using only 3 frequencies and about 2 s of EEG data. However, the targets cannot be distinguished from each other using the traditional spectrum method. Consequently, the IHHT-based high-frequency time-modulated SSVEP feature extraction method, a nonlinear and nonstationary signal processing method, is proposed to extract the time-frequency characteristics of the targets. As a result, the HFCC-SSVEP targets can be clearly distinguished from each other in the Hilbert spectrum. The proposed method helps increase the recognition efficiency of SSVEP for BCI systems, and the results show that this High-Frequency Combination Code paradigm is suitable for SSVEP BCI systems.


recommendations for motion correction of infant fnirs data applicable to multiple data sets and acquisition systems - sciencedirect

Highlights:
- Comparison of motion correction techniques on semi-simulated and real fNIRS infant data.
- Spline and wavelet combined outperform the individual use of these techniques.
- Spline and wavelet combined better recovered the true HRF in simulated data.
- Spline and wavelet combined had the best performance in motion artifact correction.
- Spline and wavelet combined saved nearly all corrupted trials across all datasets.

Although motion artifacts are a major source of noise in infant fNIRS data, how to approach motion correction in this population has only recently begun to be investigated. Homer2 offers a wide range of motion correction methods, and previous work on simulated and adult data suggested Spline interpolation and Wavelet filtering as optimal methods for the recovery of trials affected by motion. However, motion artifacts in infant data differ from those in adults in both amplitude and frequency of occurrence, so artifact correction recommendations derived from adult data might not be optimal for infant data. We hypothesized that the combined use of Spline and Wavelet would outperform their individual use on data with complex profiles of motion artifacts. To demonstrate this, we first compared, on infant semi-simulated data, the performance of several motion correction techniques on their own and of the novel combined approach; we then investigated the performance of Spline and Wavelet, alone and in combination, on real cognitive data from three datasets collected with infants of different ages (5, 7 and 10 months), with different tasks (auditory, visual and tactile) and with different NIRS systems. To quantitatively estimate and compare the efficacy of these techniques, we adopted four metrics: hemodynamic response recovery error, within-subject standard deviation, between-subjects standard deviation, and the number of trials that survived each correction method. Our results demonstrated that (i) it is always better to correct for motion artifacts than to reject the corrupted trials, and (ii) Wavelet filtering, on its own and in combination with Spline interpolation, seems to be the most effective approach to reducing the between- and within-subject standard deviations. Importantly, the combination of Spline and Wavelet gave the best performance in semi-simulation at both low and high levels of noise, and also recovered most of the trials affected by motion artifacts across all datasets, a crucial result when working with infant data.
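Homer2's correction routines are MATLAB functions and are not reproduced here; the Python fragment below is only a rough, MARA-style illustration of the spline idea behind them (the artifact boundaries and smoothing factor are hypothetical inputs): fit a smooth spline to the marked artifact segment, subtract it, and re-anchor the series so it stays continuous.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_correct(signal: np.ndarray, art_start: int, art_end: int,
                   smooth: float = 1.0) -> np.ndarray:
    """MARA-style spline correction of one marked motion-artifact segment."""
    out = signal.copy()
    seg = np.arange(art_start, art_end)
    trend = UnivariateSpline(seg, signal[seg], s=smooth * len(seg))(seg)
    corrected = signal[seg] - trend                 # remove the artifact shape
    if art_start > 0:                               # re-anchor to the baseline
        corrected += signal[art_start - 1] - corrected[0]
    out[seg] = corrected
    if art_end < len(signal):                       # keep the tail continuous
        out[art_end:] += out[art_end - 1] - signal[art_end - 1]
    return out
```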

a real-time audio frequency cubic spline interpolator - sciencedirect

The implementation of a real-time cubic spline interpolator for audio frequency applications is presented. The interpolator is based on the parabolically blended cubic spline algorithm (Browne and Gaydecki, 1987; Rogers and Adams, 1990) and is implemented in a TMS320C25 DSP environment. Results based on both time domain data and spectral analyses are presented. These show that, for a PC data acquisition system already equipped with a DSP board, the interpolator can render a hardware reconstruction filter at the output of the digital-to-analogue converter (DAC) unnecessary.
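The DSP implementation itself is not given in the abstract; as an offline analogue (with a hypothetical oversampling factor), cubic-spline reconstruction of sampled audio can be sketched as follows:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_upsample(samples: np.ndarray, fs: float, factor: int = 8) -> np.ndarray:
    """Evaluate a cubic spline through the output samples on a finer time grid,
    smoothing the staircase a hardware reconstruction filter would remove."""
    n = len(samples)
    t = np.arange(n) / fs
    t_fine = np.arange((n - 1) * factor + 1) / (fs * factor)
    return CubicSpline(t, samples)(t_fine)

fs = 8000.0
t = np.arange(0, 0.01, 1.0 / fs)                    # 80 samples of a 440 Hz tone
tone = np.sin(2 * np.pi * 440.0 * t)
print(spline_upsample(tone, fs).shape)              # (633,) = 79 * 8 + 1 points
```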

