torch_fem.assemble
Element Assembler
- class ElementAssembler(quadrature_weights, quadrature_points, shape_val, projector, elements, edges, n_points)[source]
Bases: Module
The ElementAssembler inherits from torch.nn.Module, so every operation available on torch.nn.Module is also applicable to ElementAssembler. You are not encouraged to build an ElementAssembler directly; instead, use torch_fem.assemble.ElementAssembler.from_mesh() or torch_fem.assemble.ElementAssembler.from_assembler() to build the ElementAssembler from a mesh.
Calling the ElementAssembler returns a sparse matrix, the global Galerkin matrix, of shape \(\mathbb R_{\text{sparse}}^{|\mathcal V|, |\mathcal V|}\) or \(\mathbb R_{\text{sparse}}^{|\mathcal V|\times H, |\mathcal V|\times H}\),
where \(H\) is the number of degrees of freedom per point and \(|\mathcal V|\) is the number of points.
\[ \begin{align}\begin{aligned}K \overset{\text{bsr matrix}}{\leftarrow}\hat K_\text{global}\\\hat K_{\text{global}}^{nkl} = \mathcal P_{\mathcal E}^{nhij} \hat K_{\text{local}}^{hklij}\\\hat K_{ij} = \int_\Omega f(u, v) \text dv\end{aligned}\end{align} \]
\(f\) is the forward function, which is defined by inheriting this class
\(\hat K_{\text{global}}\) : non-zero values of the global Galerkin matrix, \(\hat K_{\text{global}}\in \mathbb R^{|\mathcal E|\times d\times d}\)
\(\hat K_{\text{local}}\) : local Galerkin matrix for each element, \(\hat K_{\text{local}}\in \mathbb R^{|\mathcal C|\times h\times h\times d\times d}\)
\(\mathcal P_{\mathcal E}\) : projection (assembly) tensor from \(\hat K_{\text{local}}\) to \(\hat K_{\text{global}}\), \(\mathcal P_{\mathcal E} \in \mathbb R_{\text{sparse}}^{|\mathcal E|\times |\mathcal C|\times h\times h}\)
\(\mathcal C\) : elements/cells
\(h\) : number of basis functions for each element/cell
\(\mathcal E\) : connections for nodes/vertices/points
\(\mathcal V\) : nodes/vertices/points
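To make the projection step concrete, here is a minimal dense sketch of what \(\mathcal P_{\mathcal E}\) does conceptually: every local element matrix is scattered, with accumulation, into the global matrix at the rows and columns given by the element connectivity. The toy connectivity and random local matrices are illustrative assumptions; torch_fem performs this contraction with a sparse projector rather than a dense loop.

import torch

# toy connectivity: 2 triangles sharing an edge, 4 points in total
elements = torch.tensor([[0, 1, 2], [1, 2, 3]])   # [|C|, B]
K_local = torch.rand(elements.shape[0], 3, 3)     # [|C|, B, B] local matrices
K_global = torch.zeros(4, 4)                      # dense stand-in for the sparse result

for c, conn in enumerate(elements):
    rows = conn.repeat_interleave(conn.numel())   # row index of every (i, j) pair
    cols = conn.repeat(conn.numel())              # column index of every (i, j) pair
    K_global.index_put_((rows, cols), K_local[c].reshape(-1), accumulate=True)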
Assemble the mass matrix
\[M_{ij} = \int_\Omega u_i v_j \text dv\]
import torch_fem
import torch_fem.functional as F

class MassAssembler(torch_fem.ElementAssembler):
    def forward(self, u, v):
        return F.dot(u, v)

mesh = torch_fem.Mesh.gen_rectangle()
assembler = MassAssembler.from_mesh(mesh)
M = assembler(mesh.points)
Assemble the Laplace matrix
\[K_{ij} = \int_\Omega \nabla u_i \cdot \nabla v_j \text dv\]
import torch_fem
import torch_fem.functional as F

class LaplaceAssembler(torch_fem.ElementAssembler):
    def forward(self, gradu, gradv):
        return F.dot(gradu, gradv)

mesh = torch_fem.Mesh.gen_circle()
assembler = LaplaceAssembler.from_mesh(mesh)
K = assembler(mesh.points)
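The matrices returned above are sparse. A short, hedged sketch of using one afterwards, assuming the result behaves like a regular torch sparse tensor:

import torch

K_dense = K.to_dense()              # densify for inspection or small dense solves
x = torch.rand(K_dense.shape[0])    # an arbitrary nodal vector
Kx = K_dense @ x                    # apply the assembled operator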
- quadrature_weights
The quadrature weights of each element type. The element type is the key, which should be one of torch_fem.shape.element_types(). Each element_type corresponds to a 1D tensor of shape \([Q]\), where \(Q\) is the number of quadrature points, e.g. {"triangle6": torch.tensor([0.5, 0.5])}
- Type:
BufferDict[str, torch.Tensor]
- quadrature_points
The quadrature points of each element type. The element type is the key, which should be one of torch_fem.shape.element_types(). Each element_type corresponds to a 2D tensor of shape \([Q, D]\), where \(Q\) is the number of quadrature points and \(D\) is the dimension of the domain, e.g. {"triangle6": torch.tensor([[0.5, 0.5], [0.5, 0.0]])}
- Type:
BufferDict[str, torch.Tensor]
- shape_val
The shape-function values of each element type. The element type is the key, which should be one of torch_fem.shape.element_types(). Each element_type corresponds to a 2D tensor of shape \([Q, B]\), where \(Q\) is the number of quadrature points and \(B\) is the number of basis functions, e.g. {"triangle6": torch.tensor([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5]])}
- Type:
BufferDict[str, torch.Tensor]
- projector
The projector from elements to edges for each element type. The element type is the key, which should be one of torch_fem.shape.element_types(). Each element_type corresponds to a torch_fem.assemble.Projector() object, which can be regarded as a sparse matrix
\[\mathcal P_e: \mathbb{R}_{\text{sparse}}^{|\mathcal C_e| \times B_e \times B_e} \rightarrow \mathbb{R}^{|\mathcal E|}\]
where \(\mathcal C\) is the set of elements, \(B\) is the number of basis functions, and \(\mathcal E\) is the set of edges.
- Type:
BufferDict[str, Projector]
- elements
The element connectivity of each element type. The element type is the key, which should be one of torch_fem.shape.element_types(). Each element_type corresponds to a 2D tensor of shape \([|\mathcal C|, B]\), where \(\mathcal C\) is the set of elements and \(B\) is the number of basis functions, e.g. {"triangle6": torch.tensor([[0, 1, 2], [1, 2, 3]])}
- Type:
BufferDict[str, torch.Tensor]
- edges
The edge connectivity considering all element types: a 2D tensor of shape \([2, |\mathcal E|]\), where \(\mathcal E\) is the set of edges, e.g. torch.tensor([[0, 1, 2], [1, 2, 3]])
- Type:
torch.Tensor
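The attribute buffers above can be inspected directly on an assembler instance. A hedged sketch, continuing the mass-matrix example; the "triangle" key and dict-style access are assumptions that depend on the mesh contents and on BufferDict's interface:

assembler = MassAssembler.from_mesh(torch_fem.Mesh.gen_rectangle())
print(assembler.edges.shape)                     # torch.Size([2, |E|])
print(assembler.elements["triangle"].shape)      # [|C|, B] for that element type (assumed key)
print(assembler.quadrature_weights["triangle"])  # 1D tensor of quadrature weights (assumed key)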
- property device
- Returns:
the device of the assembler, either torch.device("cpu") or torch.device(f"cuda:{x}")
- Return type:
torch.device
- property dtype
- Returns:
the data type of the assembler, either torch.float32 or torch.float64
- Return type:
torch.dtype
- type(dtype)[source]
Casts all parameters and buffers to dst_type.
Note
This method modifies the module in-place.
- Args:
dst_type (type or string): the desired type
- Returns:
Module: self
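Because the assembler is a torch.nn.Module, the usual casting and device-placement methods apply. A brief sketch continuing the mass-matrix example above; whether mesh.points must be moved to the assembler's device and dtype manually is an assumption:

import torch

assembler = assembler.type(torch.float64)   # cast all buffers to double precision
if torch.cuda.is_available():
    assembler = assembler.cuda()            # move all buffers to the GPU
points = mesh.points.to(assembler.device, assembler.dtype)
M = assembler(points)                       # assemble on the chosen device/dtype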
- abstract forward(**kwargs)[source]
The weak form of the operator; you should override this function. Similar to the torch.nn.Module.forward() function, you can use torch_fem.assemble.ElementAssembler.__call__() to call this function.
- Parameters:
u (torch.Tensor, optional) – 1D tensor of shape \([B]\), where \(B\) is the number of basis functions
v (torch.Tensor, optional) – 1D tensor of shape \([B]\), where \(B\) is the number of basis functions
gradu (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
gradv (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
x (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
gradx (torch.Tensor, optional) – 3D tensor of shape \([B, D, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
**point_data (Dict[str, torch.Tensor], optional) – Extra point data passed through __call__. If the point data "example_key" passed in is of shape \([|\mathcal V|, ...]\), then forward receives "example_key" with shape \([B, ...]\) and "gradexample_key" with shape \([B, ..., D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
- Returns:
2D tensor of shape \([B, B]\) or 4D tensor of shape \([B, B, H, H]\), where \(B\) is the number of basis functions and \(H\) is the number of degrees of freedom per point
- Return type:
torch.Tensor
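A hedged sketch of the point_data mechanism described above: a nodal coefficient field is handed to __call__ and arrives in forward with a per-basis shape. The keyword-argument calling convention, the key name "kappa", and the averaging inside forward are assumptions made for illustration only.

import torch
import torch_fem
import torch_fem.functional as F

class WeightedLaplaceAssembler(torch_fem.ElementAssembler):
    def forward(self, gradu, gradv, kappa):
        # kappa arrives with shape [B]: one interpolated value per basis function
        return kappa.mean() * F.dot(gradu, gradv)

mesh = torch_fem.Mesh.gen_rectangle()
assembler = WeightedLaplaceAssembler.from_mesh(mesh)
kappa = torch.ones(mesh.points.shape[0])   # one coefficient value per mesh point
K = assembler(mesh.points, kappa=kappa)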
- classmethod from_assembler(obj)[source]
Build a torch_fem.assemble.ElementAssembler() from another torch_fem.assemble.ElementAssembler(). It is much faster than torch_fem.assemble.ElementAssembler.from_mesh(). When you already have an ElementAssembler, you can use this function to build another ElementAssembler sharing the same mesh.
- Parameters:
obj (torch_fem.assemble.ElementAssembler) – a torch_fem.assemble.ElementAssembler object
- Returns:
the new element assembler sharing the same mesh
- Return type:
torch_fem.assemble.ElementAssembler
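Reusing a mesh's precomputed projector across operators, with the MassAssembler and LaplaceAssembler classes from the examples above:

mesh = torch_fem.Mesh.gen_rectangle()
mass_assembler = MassAssembler.from_mesh(mesh)                         # pays the projector precomputation once
laplace_assembler = LaplaceAssembler.from_assembler(mass_assembler)    # reuses it
M = mass_assembler(mesh.points)
K = laplace_assembler(mesh.points)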
- classmethod from_mesh(mesh, quadrature_order=None)[source]
Build a torch_fem.assemble.ElementAssembler() from a mesh torch_fem.mesh.Mesh(). It is much slower than torch_fem.assemble.ElementAssembler.from_assembler(), because it precomputes the projection matrix \(\mathcal P_{\mathcal E}\).
- Parameters:
mesh (torch_fem.mesh.mesh.Mesh) – a torch_fem.mesh.Mesh object
quadrature_order (int or None) – the order should be a positive integer; if None, the quadrature order is determined by torch_fem.quadrature.get_quadrature()
- Returns:
the new element assembler using the topology of the mesh
- Return type:
torch_fem.assemble.ElementAssembler
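A short sketch of pinning the quadrature order explicitly instead of letting torch_fem.quadrature.get_quadrature() choose it; order 2 is an arbitrary illustrative choice:

assembler = MassAssembler.from_mesh(mesh, quadrature_order=2)
M = assembler(mesh.points)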
Node Assembler
- class NodeAssembler(quadrature_weights, quadrature_points, shape_val, projector, elements, n_points)[source]
Bases: Module
The NodeAssembler is used to assemble the operator on the nodes of the mesh.
The output when calling the NodeAssembler is a vector, which is of shape \([|\mathcal V|]\) or \([|\mathcal V|\times H]\), where \(|\mathcal V|\) is the number of nodes and \(H\) is the number of degrees of freedom per node.
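A hedged sketch of a NodeAssembler that assembles a constant-source load vector \(b_i = \int_\Omega v_i \text dv\); it assumes NodeAssembler is exposed as torch_fem.NodeAssembler, mirroring the ElementAssembler examples above.

import torch_fem

class SourceAssembler(torch_fem.NodeAssembler):
    def forward(self, v):
        return v                   # integrand of a unit source term against each basis function

mesh = torch_fem.Mesh.gen_rectangle()
assembler = SourceAssembler.from_mesh(mesh)
b = assembler(mesh.points)         # shape [|V|]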
- quadrature_weights
The quadrature weights of each element type. The element type is the key, which should be one of torch_fem.shape.element_types(). Each element_type corresponds to a 1D tensor of shape \([Q]\), where \(Q\) is the number of quadrature points
- Type:
BufferDict[str, torch.Tensor]
- quadrature_points
The quadrature points of each element type. The element type is the key, which should be one of torch_fem.shape.element_types(). Each element_type corresponds to a 2D tensor of shape \([Q, D]\), where \(Q\) is the number of quadrature points and \(D\) is the dimension of the mesh
- Type:
BufferDict[str, torch.Tensor]
- shape_val
The shape-function values of each element type. The element type is the key, which should be one of torch_fem.shape.element_types(). Each element_type corresponds to a 2D tensor of shape \([Q, B]\), where \(Q\) is the number of quadrature points and \(B\) is the number of basis functions
- Type:
BufferDict[str, torch.Tensor]
- projector
The projector from elements to nodes for each element type. The element type is the key, which should be one of torch_fem.shape.element_types(). Each element_type corresponds to a torch_fem.assemble.Projector() object, which can be regarded as a sparse matrix
\[\mathcal P_e: \mathbb{R}_{\text{sparse}}^{|\mathcal C_e| \times B_e} \rightarrow \mathbb{R}^{|\mathcal V|}\]
where \(\mathcal C\) is the set of elements, \(B\) is the number of basis functions, and \(\mathcal V\) is the set of nodes/vertices/points.
- Type:
BufferDict[str, Projector]
- elements
The element connectivity of each element type. The element type is the key, which should be one of torch_fem.shape.element_types(). Each element_type corresponds to a 2D tensor of shape \([N, B]\), where \(N\) is the number of elements and \(B\) is the number of basis functions
- Type:
BufferDict[str, torch.Tensor]
- type(dtype)[source]
Casts all parameters and buffers to dst_type.
Note
This method modifies the module in-place.
- Args:
dst_type (type or string): the desired type
- Returns:
Module: self
- abstract forward(*args)[source]
The weak form of the operator; you should override this function. Similar to the torch.nn.Module.forward() function, you can use torch_fem.assemble.NodeAssembler.__call__() to call this function.
- Parameters:
u (torch.Tensor, optional) – 1D tensor of shape \([B]\), where \(B\) is the number of basis functions
v (torch.Tensor, optional) – 1D tensor of shape \([B]\), where \(B\) is the number of basis functions
gradu (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
gradv (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
x (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
gradx (torch.Tensor, optional) – 3D tensor of shape \([B, D, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
**point_data (Dict[str, torch.Tensor], optional) – Extra point data passed through __call__. If the point data "example_key" passed in is of shape \([|\mathcal V|, ...]\), then forward receives "example_key" with shape \([B, ...]\) and "gradexample_key" with shape \([B, ..., D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
- Returns:
1D tensor of shape \([B]\) or 2D tensor of shape \([B, H]\), where \(B\) is the number of basis functions and \(H\) is the number of degrees of freedom per point
- Return type:
torch.Tensor
- classmethod from_assembler(obj)[source]
Build a NodeAssembler from another torch_fem.assemble.NodeAssembler() or torch_fem.assemble.ElementAssembler(). It is much faster than torch_fem.assemble.NodeAssembler.from_mesh(). When you already have a NodeAssembler or an ElementAssembler, you can use this function to build another NodeAssembler sharing the same mesh.
- Parameters:
obj (torch_fem.assemble.NodeAssembler or torch_fem.assemble.ElementAssembler) – a torch_fem.assemble.NodeAssembler() or torch_fem.assemble.ElementAssembler() object
- Returns:
the new node assembler sharing the same mesh
- Return type:
torch_fem.assemble.NodeAssembler
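Sharing the same mesh topology across assembler kinds, continuing the sketches above (SourceAssembler and laplace_assembler are the hypothetical class and instance defined earlier):

source_assembler = SourceAssembler.from_assembler(laplace_assembler)   # reuse the element assembler's mesh data
b = source_assembler(mesh.points)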
- classmethod from_mesh(mesh, quadrature_order=None)[source]
Build a torch_fem.assemble.NodeAssembler() from a mesh torch_fem.mesh.Mesh(). It is slower than torch_fem.assemble.NodeAssembler.from_assembler(), because it precomputes the projection matrix \(\mathcal P_{\mathcal V}\).
- Parameters:
mesh (torch_fem.mesh.mesh.Mesh) – a torch_fem.mesh.Mesh object
quadrature_order (int or None) – the order should be a positive integer; if None, the quadrature order is determined by torch_fem.quadrature.get_quadrature()
- Returns:
the new node assembler using the topology of the mesh
- Return type:
torch_fem.assemble.NodeAssembler
Built-in Assemblers
- class LaplaceElementAssembler(quadrature_weights, quadrature_points, shape_val, projector, elements, edges, n_points)[source]
Bases: ElementAssembler
The element Laplace assembler (a usage sketch follows this entry)
\[K = \int_{\Omega}\nabla u \cdot \nabla v \mathrm{d}v\]
- forward(gradu, gradv)[source]
The weak form of the operator; you should override this function. Similar to the torch.nn.Module.forward() function, you can use torch_fem.assemble.ElementAssembler.__call__() to call this function.
- Parameters:
u (torch.Tensor, optional) – 1D tensor of shape \([B]\), where \(B\) is the number of basis functions
v (torch.Tensor, optional) – 1D tensor of shape \([B]\), where \(B\) is the number of basis functions
gradu (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
gradv (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
x (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
gradx (torch.Tensor, optional) – 3D tensor of shape \([B, D, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
**point_data (Dict[str, torch.Tensor], optional) – Extra point data passed through __call__. If the point data "example_key" passed in is of shape \([|\mathcal V|, ...]\), then forward receives "example_key" with shape \([B, ...]\) and "gradexample_key" with shape \([B, ..., D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
- Returns:
2D tensor of shape \([B, B]\) or 4D tensor of shape \([B, B, H, H]\), where \(B\) is the number of basis functions and \(H\) is the number of degrees of freedom per point
- Return type:
torch.Tensor
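A hedged usage sketch of the built-in Laplace assembler; it assumes LaplaceElementAssembler can be imported from torch_fem.assemble.

import torch_fem
from torch_fem.assemble import LaplaceElementAssembler

mesh = torch_fem.Mesh.gen_rectangle()
laplace = LaplaceElementAssembler.from_mesh(mesh)
K = laplace(mesh.points)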
- class MassElementAssembler(quadrature_weights, quadrature_points, shape_val, projector, elements, edges, n_points)[source]
Bases: ElementAssembler
The element mass assembler (a usage sketch follows this entry)
\[M = \int_{\Omega} u v \mathrm{d}v\]
- forward(u, v)[source]
The weak form of the operator; you should override this function. Similar to the torch.nn.Module.forward() function, you can use torch_fem.assemble.ElementAssembler.__call__() to call this function.
- Parameters:
u (torch.Tensor, optional) – 1D tensor of shape \([B]\), where \(B\) is the number of basis functions
v (torch.Tensor, optional) – 1D tensor of shape \([B]\), where \(B\) is the number of basis functions
gradu (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
gradv (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
x (torch.Tensor, optional) – 2D tensor of shape \([B, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
gradx (torch.Tensor, optional) – 3D tensor of shape \([B, D, D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
**point_data (Dict[str, torch.Tensor], optional) – Extra point data passed through __call__. If the point data "example_key" passed in is of shape \([|\mathcal V|, ...]\), then forward receives "example_key" with shape \([B, ...]\) and "gradexample_key" with shape \([B, ..., D]\), where \(B\) is the number of basis functions and \(D\) is the dimension of the domain
- Returns:
2D tensor of shape \([B, B]\) or 4D tensor of shape \([B, B, H, H]\), where \(B\) is the number of basis functions and \(H\) is the number of degrees of freedom per point
- Return type:
torch.Tensor
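The same pattern for the mass assembler, reusing the mesh data of the built-in Laplace assembler from the sketch above (the import path is again an assumption):

from torch_fem.assemble import MassElementAssembler

mass = MassElementAssembler.from_assembler(laplace)   # share the precomputed projector
M = mass(mesh.points)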