Table 3. Hyperparameter optimization methods for the PMV prediction models

SVM
1. Box constraint (C): Balances model complexity and training error. Higher values lead to a closer fit but risk overfitting.
2. Kernel scale (σ): Determines the shape of the RBF kernel, controlling the flexibility of the decision boundary.
3. Epsilon (ε): Specifies the tolerance level for prediction errors, influencing model robustness.
Bayesian optimization identified the optimal hyperparameters, with the kernel scale showing the most significant impact on accuracy.

RF
1. Number of trees: Determines the number of decision trees in the ensemble. More trees generally increase accuracy but also computational cost.
2. Minimum leaf size: Regulates the minimum number of samples in a leaf node to control model complexity and prevent overfitting.
3. Number of predictors for splitting: Adjusted to optimize feature selection at each decision split.
Bayesian optimization was used for hyperparameter tuning, revealing that increasing the number of trees enhanced prediction accuracy, while smaller leaf sizes improved granularity but increased training time.

NARX neural network
The NARX model structure involved determining the number of neurons in the hidden layer and the lag lengths for inputs and outputs. Bayesian optimization was applied to identify the optimal combination of hyperparameters.

LSTM
LSTM model training involved tuning the number of LSTM units, the dropout rate (to prevent overfitting), and the learning rate. Bayesian optimization and trial-and-error methods were used to achieve the optimal model configurations.

Abbreviations: LSTM: Long short-term memory; NARX: Non-linear autoregressive network with exogenous inputs; PMV: Predicted mean vote; RBF: Radial basis function; RF: Random forest; SVM: Support vector machine.
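As a hedged illustration of the Bayesian tuning summarized in Table 3, the sketch below searches the three SVM hyperparameters with scikit-optimize's BayesSearchCV. The data, search ranges, and iteration budget are assumptions for demonstration only; note that scikit-learn parameterizes the RBF kernel through gamma, which plays the role of an inverse squared kernel scale.

```python
# Illustrative sketch only: ranges and data are NOT the values used in the study.
import numpy as np
from sklearn.svm import SVR
from skopt import BayesSearchCV
from skopt.space import Real

rng = np.random.default_rng(0)
X = rng.random((500, 6))            # placeholder predictor matrix (e.g., zone sensors)
y = rng.uniform(-3, 3, 500)         # placeholder PMV targets on the standard scale

search = BayesSearchCV(
    estimator=SVR(kernel="rbf"),
    search_spaces={
        "C": Real(1e-2, 1e3, prior="log-uniform"),        # box constraint
        "gamma": Real(1e-4, 1e1, prior="log-uniform"),    # ~ 1 / kernel scale^2
        "epsilon": Real(1e-3, 1.0, prior="log-uniform"),  # error tolerance
    },
    n_iter=30,                      # Bayesian optimization budget (assumed)
    cv=5,
    scoring="neg_mean_absolute_error",
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```

The same pattern extends to the RF, NARX, and LSTM searches by swapping the estimator and the search space.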

Table 4. Summary of ML model training

ML model   MAE               Maximum             Computing cost (s)
           (test dataset)    prediction error    Training   Prediction   Total
SVM        0.1041            0.4810              235        1 a          236
RF         0.1040            0.4200              18 a       450          468
NARX       0.0623 a          0.3140 a            96         2            98 a
LSTM       0.0843            0.4830              840        26           866

Note: a Best results.
Abbreviations: LSTM: Long short-term memory; MAE: Mean absolute error; ML: Machine learning; NARX: Non-linear autoregressive network with exogenous inputs; RF: Random forest; SVM: Support vector machine.
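The metrics reported in Table 4 (test-set MAE, maximum prediction error, and training/prediction cost) can be computed for any regressor exposing fit/predict by timing the two phases separately. The helper below is a minimal sketch; the model and data names are placeholders.

```python
# Minimal sketch of how the Table 4 metrics can be obtained for a generic model.
import time
import numpy as np

def evaluate(model, X_train, y_train, X_test, y_test):
    t0 = time.perf_counter()
    model.fit(X_train, y_train)
    train_s = time.perf_counter() - t0          # training cost (s)

    t0 = time.perf_counter()
    y_pred = model.predict(X_test)
    predict_s = time.perf_counter() - t0        # prediction cost (s)

    mae = np.mean(np.abs(y_test - y_pred))      # MAE on the test dataset
    max_err = np.max(np.abs(y_test - y_pred))   # maximum prediction error
    return mae, max_err, train_s, predict_s, train_s + predict_s
```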
applicability in scenarios characterized by strong temporal dependencies.²⁰
The NARX model exhibited the best accuracy and generalization among all models, along with the lowest computational cost. Potential improvements in NARX’s accuracy could be achieved by increasing the number of hidden layers, but this would increase computational time.

Figure 4. Feature sensitivity analysis of predictors
Abbreviation: PMV: Predicted mean vote.
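To make the NARX structure concrete, the following sketch emulates its series-parallel (open-loop) form: a one-hidden-layer feedforward network trained on lagged exogenous inputs u and lagged outputs y. The lag lengths and neuron count are illustrative stand-ins for the Bayesian-optimized values, not the configuration used in this study.

```python
# Hedged sketch of the NARX idea: predict y(t) from lagged inputs and outputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_narx_dataset(u, y, in_lags=3, out_lags=2):
    """Stack [u(t-1..t-in_lags), y(t-1..t-out_lags)] as features for y(t)."""
    start = max(in_lags, out_lags)
    X, T = [], []
    for t in range(start, len(y)):
        row = np.concatenate([u[t - in_lags:t].ravel(), y[t - out_lags:t]])
        X.append(row)
        T.append(y[t])
    return np.array(X), np.array(T)

rng = np.random.default_rng(1)
u = rng.random((1000, 4))               # placeholder exogenous inputs
y = rng.uniform(-3, 3, 1000)            # placeholder PMV series

X, T = make_narx_dataset(u, y)
narx = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)  # one hidden layer
narx.fit(X, T)
y_hat = narx.predict(X[-1:])            # one-step-ahead prediction
```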
The LSTM model significantly outperformed the non-ANN models in PMV prediction, displaying substantially better prediction accuracy. However, it did not show an advantage in generalization, as indicated by its maximum prediction error. The LSTM model required the highest training time due to the computational complexity of LSTM cells. Thus, its superior prediction accuracy came at a higher computational cost.

In summary, ANN models outperformed non-ANN models in terms of accuracy. Among the ANN models, the choice between NARX and LSTM depends on the specific application requirements. For real-time implementation with limited computational resources, NARX is preferable. Conversely, for complex systems where accuracy is paramount, LSTM is more appropriate. Given that NARX achieved the lowest test and prediction errors, along with the lowest combined training and prediction cost, it was selected for integration into the MPC framework in this study.
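Because the NARX predictor feeds the MPC loop, a schematic receding-horizon step may help fix ideas before the results: candidate control inputs are scored with the learned PMV model, and the best one is applied. This is a generic sketch under assumed names (predict_pmv, candidates, apply_and_measure), not the controller formulation used in this study.

```python
# Generic receding-horizon sketch; all names below are hypothetical.
import numpy as np

def mpc_step(predict_pmv, history, candidates, target_pmv=0.0):
    """Return the candidate control input whose one-step-ahead
    predicted PMV is closest to the comfort target."""
    costs = [abs(predict_pmv(history, u) - target_pmv) for u in candidates]
    return candidates[int(np.argmin(costs))]

# Receding horizon: apply the chosen input, shift the history, repeat.
# for t in range(T):
#     u_star = mpc_step(narx_predict, history, candidate_setpoints)
#     history = apply_and_measure(u_star, history)   # hypothetical plant step
```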
5. Results and discussion

In this section, the performance of the MPC system is compared against the baseline BMS mode of operation. Data collected over a 7-day period for both the BMS and MPC systems indicated that the statistical variations of outdoor temperature and solar radiation were similar.

