soft.fuzzy.logic.inference package

Subpackages

Submodules

soft.fuzzy.logic.inference.abstract module

Defines an abstract representation of the fuzzy inference process utilized by fuzzy inference engines.

class soft.fuzzy.logic.inference.abstract.BaseInference(specs: Engine, consequences: Tensor | GroupedFuzzySets, links, offset)

Bases: ABC, Module

Implements the overall inference engine used in fuzzy logic control or neuro-fuzzy networks. This is an abstract class: its abstract method calc_rules_applicability() must be defined by subclasses. Typically, this involves choosing which t-norm to use when aggregating a rule’s antecedents (e.g., minimum or product).
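As an illustration of the choice a subclass makes, the two t-norms mentioned above can be sketched in a few lines of plain Python (these helper names are hypothetical, not part of the package):

```python
import math

def product_tnorm(memberships):
    """Aggregate a rule's antecedent memberships with the product t-norm."""
    return math.prod(memberships)

def minimum_tnorm(memberships):
    """Aggregate a rule's antecedent memberships with the minimum t-norm."""
    return min(memberships)

# One rule whose two antecedents fire with degrees 0.5 and 0.8:
rule_memberships = [0.5, 0.8]
product_applicability = product_tnorm(rule_memberships)  # 0.5 * 0.8 = 0.4
minimum_applicability = minimum_tnorm(rule_memberships)  # min(0.5, 0.8) = 0.5
```

A concrete subclass of BaseInference would apply its chosen t-norm across the rule dimension of the intermediate membership tensor rather than over a Python list.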

calc_intermediate_input(antecedents_memberships: tuple | Membership) Tensor

Calculates an intermediate output that is required for inference. Specifically, it changes the format of the antecedents’ memberships to:

(number of observations, number of inputs, number of rules).

Parameters:

antecedents_memberships – The antecedents’ memberships.

Returns:

Intermediate memberships in the shape of (number of observations, number of inputs, number of rules).

Return type:

Tensor
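To make the target shape concrete, here is a small sketch with hypothetical sizes (nested lists stand in for the actual tensor):

```python
# Hypothetical sizes: 2 observations, 3 input variables, 4 rules.
n_obs, n_inputs, n_rules = 2, 3, 4

# Intermediate memberships hold one degree per (observation, input, rule)
# triple, so each input's membership is repeated across every rule.
intermediate = [
    [[0.0] * n_rules for _ in range(n_inputs)] for _ in range(n_obs)
]
shape = (len(intermediate), len(intermediate[0]), len(intermediate[0][0]))
# shape == (2, 3, 4)
```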

abstract calc_rules_applicability(antecedents_memberships: Membership) NoReturn

Abstract method for calculating the fuzzy logic rules’ applicability.

Parameters:

antecedents_memberships – The antecedents’ memberships.

Raises:

NotImplementedError – as this is an abstract method.

soft.fuzzy.logic.inference.linkage module

This module supports the linkage between layers that fuzzy logic inference engines require. For example, it contains the class that supports the Gumbel Softmax trick for discovering fuzzy logic rules’ premises.

class soft.fuzzy.logic.inference.linkage.BinaryLinks

Bases: Module

This class will implement the ‘standard’ neuro-fuzzy network definition, where connections or edges between layers of a neuro-fuzzy network can only have a value of either 0 or 1.

Disclaimer: This is useful when the neuro-fuzzy network architecture has been defined a priori to training, such as through the SelfOrganize functionality, but is not to be used when performing network morphism.

forward(*_) Tensor

Apply the defined binary linkage to the given membership degrees.

Note: This ‘forward’ function is primarily intended for use with membership degrees describing the relationship to the premise layer.

Returns:

The membership degrees, appropriately unsqueezed and applied to their respective dimensions for later use in inferring rule activation.
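The effect of binary linkage can be sketched as follows (hypothetical numbers, not the class’s actual code): a premise’s membership degree survives only where its link is 1.

```python
# memberships[i][r]: input variable i's membership degree, repeated per rule r.
# links[i][r]: 1 if variable i's premise participates in rule r, else 0.
memberships = [[0.9, 0.9], [0.3, 0.3]]
links = [[1, 0], [1, 1]]

linked = [
    [m * l for m, l in zip(m_row, l_row)]
    for m_row, l_row in zip(memberships, links)
]
# linked == [[0.9, 0.0], [0.3, 0.3]]
```

Zeroed entries then contribute nothing when a t-norm such as the product aggregates each rule’s antecedents.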

class soft.fuzzy.logic.inference.linkage.GroupedLinks

Bases: Module

This class is a container for the various LogitLinks or BinaryLinks that are used to probabilistically sample from the fuzzy sets along some dimension. This class is defined as a torch.nn.Module for compatibility with torch.nn.ModuleList among other helpful necessities. More specifically, this class enables us to use other code that may expect torch.nn.Module functionality, such as GumbelSoftmax.

expand_logits_if_necessary(membership_degrees: Tensor, values: str = 'random', membership_dimension: int = -1) None

Determine if there is a mismatch between the incoming membership degrees and the currently stored logits. In other words, if it appears in the membership degrees that a fuzzy set has been introduced or created, then the logits probabilistically sampling from those fuzzy sets along some dimension will not be ‘aware’ of this new fuzzy set yet. As such, we need to expand the defined logits to account for this newly created fuzzy set.

Parameters:
  • membership_degrees – The membership degrees to some fuzzy sets.

  • values – Whether the new logits should follow a real-number value convention or binary value convention. If values is 'random', then real-number values will be used; this is useful for when the Gumbel Softmax trick is being applied. If values is 'zero', then binary values will be used, but all the values are initialized as zero. This is helpful when we are using predefined rule premises; however, it is the newly added premise layer that we must accommodate for.

  • membership_dimension – The membership dimension under consideration. For example, whether to perform this operation along the number of inputs dimension. This is typically the desired behavior, but which dimension refers to the number of inputs may change from code to code.

Returns:

None
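The expansion can be pictured with a small 2-D sketch; the hypothetical expand_logits helper below stands in for the tensor-based method, padding each row of a logit table to cover newly created fuzzy sets.

```python
import random

def expand_logits(logits, n_sets, values="random"):
    """Pad each row of a 2-D logit table so it covers n_sets fuzzy sets."""
    for row in logits:
        missing = n_sets - len(row)
        if missing > 0:
            if values == "random":
                # Real-valued logits, e.g., for the Gumbel Softmax trick.
                row.extend(random.gauss(0.0, 1.0) for _ in range(missing))
            elif values == "zero":
                # Zero-initialized entries, e.g., for predefined rule premises.
                row.extend(0.0 for _ in range(missing))
    return logits

logits = [[0.2, -1.1], [0.5, 0.3]]  # two variables, two fuzzy sets each
expand_logits(logits, 3, values="zero")  # a third fuzzy set has appeared
# each row now has 3 entries; the appended ones are 0.0
```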

forward(input_tensor: Tensor) Tensor

Fetch the logits for later use.

class soft.fuzzy.logic.inference.linkage.GumbelSoftmax(grouped_links: GroupedLinks, *args, **kwargs)

Bases: Module

This class will implement the Gumbel Softmax trick for discovering fuzzy logic rules’ premises.
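The trick itself can be sketched in pure Python (an illustrative gumbel_softmax helper, not this class’s actual implementation): Gumbel noise is added to the logits, scaled by a temperature, and passed through a softmax, giving a differentiable relaxation of sampling a premise.

```python
import math
import random

def gumbel_softmax(logits, tau=1.0):
    """Sample a relaxed one-hot vector from logits via the Gumbel Softmax trick."""
    # Gumbel(0, 1) noise: -log(-log(U)) for U ~ Uniform(0, 1).
    noisy = [
        (l - math.log(-math.log(random.random() + 1e-12) + 1e-12)) / tau
        for l in logits
    ]
    # Numerically stable softmax over the noisy, temperature-scaled logits.
    m = max(noisy)
    exps = [math.exp(v - m) for v in noisy]
    total = sum(exps)
    return [e / total for e in exps]

sample = gumbel_softmax([2.0, 0.5, -1.0], tau=0.5)
# sample is a probability vector: non-negative entries summing to 1
```

Lower temperatures push the sample toward a one-hot vector, which is what makes the trick useful for discrete premise selection.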

forward(membership_degrees: Tensor, mask: Tensor) Tensor

Apply the Gumbel Softmax trick to determine the fuzzy logic rules’ premises. The class will expand the logits if necessary (i.e., to accommodate any recently introduced fuzzy sets in the premise layer). It will then determine a cutoff threshold, given the membership degrees to the premise layer, which assists in avoiding poor rule premise selection.

Parameters:
  • membership_degrees – The membership degrees to some fuzzy sets.

  • mask

Returns:

class soft.fuzzy.logic.inference.linkage.LogitLinks

Bases: Module

This class stores the logit values which will later be used to calculate probabilities within a neural architecture. It simply returns the logits in the forward operation, but is defined as a torch.nn.Module for compatibility with torch.nn.ModuleList among other helpful necessities. More specifically, this class enables us to use other code that may expect torch.nn.Module functionality, such as GroupedFuzzySets.

To further elaborate, this class will implement the ‘non-standard’ neuro-fuzzy network definition, as proposed by J.W. Hostetter, where connections or edges between the layers of a neuro-fuzzy network have been relaxed such that they may now range along the real numbers. These real numbers correspond to logits, so that the resulting connections may then have a probabilistic interpretation. For example, instead of saying premise A for feature 1 is involved in some rule, we change this to premise A for feature 1 has a __ % probability of being involved in some rule.

Note: During training, we have the above-mentioned probabilistic interpretation, where there will be a probability of sampling some rule from the set of all possible/probable rules. However, during evaluation, this probabilistic interpretation is temporarily disabled in favor of a deterministic output (i.e., the outcomes with maximum probability).

Disclaimer: This is useful when the neuro-fuzzy network architecture has not been defined a priori to training, but is not to be used when using the SelfOrganize functionality.

forward(input_tensor) Parameter

Fetch the logits for later use.

Returns:

The parameter representing the involved logits.
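The probabilistic reading of the logits described above can be illustrated with hypothetical numbers: during training, premises are sampled according to the softmax of the logits; during evaluation, the maximum-probability premise is taken deterministically.

```python
import math

logits = [1.5, 0.2, -0.7]  # hypothetical logits for premises A, B, C of one feature

# Training-time view: each premise has a probability of being in the rule.
exps = [math.exp(l) for l in logits]
probs = [e / sum(exps) for e in exps]

# Evaluation-time view: the probabilistic interpretation is disabled and the
# most probable premise is chosen deterministically.
chosen = max(range(len(probs)), key=probs.__getitem__)
# chosen == 0  (premise A has the largest logit, hence the largest probability)
```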

Module contents