December 9, 2023

This article was first published on DataGeeek and kindly contributed to R-bloggers.

A recent debate in Turkey is whether the Turkish government has suppressed the US Dollar/Turkish Lira exchange rate (USD/TRY) to prevent economic turmoil. Many trade authorities, especially exporters, think the USD/TRY parity should be in the range of 24-25 Turkish Lira.

To examine this, we'll forecast the rate for the next full year and check whether it stays in a reasonable range. But first, we will model our data with a bagged Multivariate Adaptive Regression Spline (MARS) via the earth package. The predictors in our regression model are Turkey's current account (account) and producer price index (PPI).
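For readers new to MARS: the model builds piecewise-linear hinge functions of the predictors and then prunes them back. As a rough, hedged illustration of the underlying engine (the simulated data, object names, and settings below are assumptions, not the author's code), a single unbagged fit with earth looks like this; the bagged, tuned tidymodels version follows.

#a minimal, standalone illustration: one unbagged MARS fit on simulated data
set.seed(1)
toy <- data.frame(account = rnorm(100), ppi = rnorm(100))
toy$usdtry <- 15 + 0.5 * toy$ppi - 0.3 * toy$account + rnorm(100, sd = 0.5)

library(earth)
mars_fit <- earth(usdtry ~ account + ppi,
                  data = toy,
                  degree = 2,           #allow two-way interactions between hinge terms
                  pmethod = "backward") #backward pruning of basis functions
summary(mars_fit)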

library(tidyverse)
library(tidymodels)
library(lubridate)
library(timetk)
library(tsibble)
library(modeltime)
library(baguette)
library(fable)
library(plotly)
library(ggtext)
library(systemfonts)
library(showtext)

#Turkiye current account
#(the raw data import was cut from the scraped post; `df` is assumed to hold
# the source series with `type`, `date`, and `value` columns)
df_account <- df %>%
  filter(type == "ca") %>%
  mutate(date =
           #removing parentheses and the text inside
           case_when(str_detect(date, " \\(.*\\)") ~ str_remove(date, " \\(.*\\)"),
                     TRUE ~ date)) %>%
  mutate(date = parse_date(date, format = "%b %d, %Y") %>%
           #subtract 2 months from the release date
           floor_date("month") %m-% months(2),
         value = str_remove(value, "B") %>% as.numeric()) %>%
  select(date, account = value)

#Turkiye producer price index (PPI)
df_ppi <- df %>%
  filter(type == "ppi") %>%
  mutate(date =
           #remove parentheses and the text inside
           case_when(str_detect(date, " \\(.*\\)") ~ str_remove(date, " \\(.*\\)"),
                     TRUE ~ date)) %>%
  mutate(date = parse_date(date, format = "%b %d, %Y") %>%
           #subtract 1 month from the release date
           floor_date("month") %m-% months(1),
         value = str_remove(value, "B") %>% as.numeric()) %>%
  select(date, ppi = value)

#USD/TRY - US Dollar / Turkish Lira
df_usdtry <- df %>%
  filter(type == "usdtry") %>%
  mutate(date = parse_date(date, format = "%m/%d/%Y"),
         value = as.numeric(value)) %>%
  select(date, usdtry = value)

#merge all datasets
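The merge itself did not survive the scrape; a minimal sketch, assuming the three monthly series are simply joined by date into the `df_tidy` object used in the rest of the post:

#join the three series on date (a reconstruction, not the author's exact code)
df_tidy <- df_usdtry %>%
  inner_join(df_account, by = "date") %>%
  inner_join(df_ppi, by = "date")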


Now that we've created our dataset, we can start modeling. Because the variables are on different scales, we normalize them all. We also have to convert the categorical variables to numeric, because the engine requires numeric inputs.

#splitting the data into training and test sets
splits <- df_tidy %>%
  time_series_split(assess = "1 year",
                    cumulative = TRUE,
                    date_var = date)

df_train <- training(splits)
df_test <- testing(splits)

#preprocessing
#(the recipe() call itself was garbled in the scrape; `usdtry ~ .` is assumed)
df_rec_tidy <- recipe(usdtry ~ ., data = df_train) %>%
  step_timeseries_signature(date) %>%
  step_rm(date) %>%
  step_rm(contains("iso"), contains("minute"), contains("hour"),
          contains("am.pm"), contains("xts")) %>%
  step_zv(all_numeric_predictors()) %>%
  step_normalize(all_numeric_predictors()) %>%
  step_dummy(contains("lbl"), one_hot = TRUE)
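As an optional sanity check (not part of the original post), timetk can visualize the cumulative training window and the 1-year assessment window implied by the split; this assumes the `splits` and `df_tidy` objects built above.

#visualize the train/test plan implied by time_series_split()
splits %>%
  tk_time_series_cv_plan() %>%
  plot_time_series_cv_plan(date, usdtry, .interactive = FALSE)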

We perform hyperparameter tuning to find the optimal model for the data. As seen below, the best degree of interaction is 2, which means the model allows two-way interaction terms, and the backward pruning method is selected.

#processed data, used to determine the maximum number of model terms
df_proc <- df_rec_tidy %>%
  prep() %>%
  bake(new_data = NULL)

#bagged MARS specification with the parameters to be tuned
mars_spec <- bag_mars(num_terms = ncol(df_proc) + 1, #the exact expression was garbled; the column count of the processed data is assumed
                      prod_degree = tune(),
                      prune_method = tune()) %>%
  set_engine("earth") %>%
  set_mode("regression")

#tuning workflow (the object names below were lost in the scrape and are assumed)
mars_wflow <- workflow() %>%
  add_model(mars_spec) %>%
  add_recipe(df_rec_tidy)

#cross-validation for resamples
set.seed(12345)
df_folds <- vfold_cv(df_train)

#tuning over hyper_grid (its definition was also cut; see the sketch after this block)
mars_res <- mars_wflow %>%
  tune_grid(df_folds,
            grid = hyper_grid,
            metrics = metric_set(rsq))

#selecting the best parameters according to R-squared
mars_param_best <- mars_res %>%
  select_best(metric = "rsq") %>%
  select(-.config)

mars_param_best
# A tibble: 1 x 2
#   prod_degree prune_method
#         <int> <chr>
# 1           2 backward
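The definition of `hyper_grid` did not survive the scrape; one plausible grid over the two tuned parameters, offered only as an assumption, is:

#candidate values for the tuned MARS parameters (an assumed grid, not the author's)
hyper_grid <- expand_grid(
  prod_degree = 1:2,
  prune_method = c("backward", "forward", "exhaustive", "none")
)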

We can build our model with the best parameters we found earlier in the tuning process.

#final workflow with the best parameters
final_df_wflow <- workflow() %>%
  add_model(mars_spec) %>%
  add_recipe(df_rec_tidy) %>%
  finalize_workflow(mars_param_best)

#set the model table
set.seed(2023)
model_table <- modeltime_table(
  final_df_wflow %>% fit(df_train)
)

#calibration
calibration_table <- model_table %>%
  modeltime_calibrate(df_test)

#accuracy
calibration_table %>%
  modeltime_accuracy() %>%
  select(rmse, rsq)

# A tibble: 1 x 2
#    rmse   rsq
#   <dbl> <dbl>
# 1  2.55 0.543

When we look at the accuracy results, the coefficient of determination (rsq) looks low; the accuracy measures are calculated on the test data from the calibration step, and the time range of that test data is short. But considering the scale of the target variable, the RMSE looks fine.
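For context, the same two measures can be recomputed from the test-set predictions that modeltime stores during calibration, which makes explicit that they are out-of-sample metrics. This is a minimal sketch assuming modeltime's standard `.calibration_data` columns (`.actual`, `.prediction`).

#recompute RMSE and R-squared from the stored test-set predictions
calibration_table %>%
  select(.calibration_data) %>%
  unnest(.calibration_data) %>%
  summarise(rmse = yardstick::rmse_vec(.actual, .prediction),
            rsq = yardstick::rsq_vec(.actual, .prediction))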


Before we start forecasting, we create the future dataset that will supply the predictors for the regression model. To do this, we use the automated ARIMA function. When we inspect the fitted ARIMA models for the PPI and account variables, we can see that they have annual seasonality.
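One way to see that seasonality (a hedged sketch, not the author's code) is to print the report of the automatically selected ARIMA model; a seasonal component such as (P,D,Q)[12] in the chosen specification indicates yearly seasonality. The `df_tidy` object from the merge step is assumed.

#inspect the automatically selected ARIMA specification for PPI
df_tidy %>%
  mutate(date = yearmonth(date)) %>%
  as_tsibble(index = date) %>%
  model(ARIMA(ppi)) %>%
  report()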

#future dates for the next 12 months
#(the name of the source object was lost in the scrape; the merged df_tidy is assumed)
future_dates <- df_tidy %>%
  mutate(date = yearmonth(date)) %>%
  as_tsibble() %>%
  new_data(12) %>%
  as_tibble() %>%
  mutate(date = as.Date(date))

#PPI forecasts for the next 12 months
ppi <- df_tidy %>%
  mutate(date = yearmonth(date)) %>%
  as_tsibble() %>%
  model(ARIMA(ppi)) %>%
  forecast(h = 12) %>%
  as_tibble() %>%
  select(ppi = .mean)

#current account forecasts for the next 12 months
account <- df_tidy %>%
  mutate(date = yearmonth(date)) %>%
  as_tsibble() %>%
  model(ARIMA(account)) %>%
  forecast(h = 12) %>%
  as_tibble() %>%
  select(account = .mean)

#future dataset of predictors
df_future <- future_dates %>%
  bind_cols(ppi) %>%
  bind_cols(account)

Finally, we can plot the actual values from 2016 to 2023 together with the forecasts for the next 12 months.

#forecasting plot dataset
set.seed(1983)
df_plot <- calibration_table %>%
  modeltime_refit(df_tidy) %>%
  modeltime_forecast(new_data = df_future,
                     actual_data = df_tidy %>% filter(year(date) > 2015))

#observed dataset
df_actual <- df_plot %>% filter(.key == "actual")

#predicted dataset
df_pred <- df_plot %>% filter(.key == "prediction")

#importing Google font
font_add_google("Nunito", "nuni")
showtext_auto()

#hover-info texts
#(their definitions were cut from the scraped post; simple date/value strings are assumed)
text_actual <- str_glue("Date: {format(df_actual$.index, '%b %Y')}, USD/TRY: {round(df_actual$.value, 2)}")
text_pred <- str_glue("Date: {format(df_pred$.index, '%b %Y')}, Forecast: {round(df_pred$.value, 2)}")

df_plot %>%
  ggplot(aes(.index, .value)) +
  geom_area(data = . %>% filter(.key == "actual"),
            fill = "#69b3a2", alpha = 0.5) +
  geom_line(data = . %>% filter(.key == "actual"),
            color = "#69b3a2") +
  geom_point(aes(text = text_actual),
             data = . %>% filter(.key == "actual"),
             color = "#69b3a2") +
  geom_area(data = . %>% filter(.key == "prediction"),
            fill = "#cf2765", alpha = 0.5) +
  geom_line(data = . %>% filter(.key == "prediction"),
            color = "#cf2765") +
  geom_point(aes(text = text_pred),
             data = . %>% filter(.key == "prediction"),
             color = "#cf2765") +
  scale_y_continuous(limits = c(0, 30)) +
  labs(x = "", y = "",
       title = "The USD/TRY Rates from 2016 to 2023\nand Forecast for the next 12 months") +
  theme_minimal() +
  theme(plot.title = element_markdown(hjust = 0.5, face = "bold"),
        plot.background = element_rect(fill = "#f3f6f4", color = NA),
        panel.background = element_rect(fill = "#f3f6f4", color = NA)) -> p

#setting the font family for ggplotly
font <- list(family = "Nunito")
label <- list(font = font)

ggplotly(p, tooltip = "text") %>%
  style(hoverlabel = label) %>%
  layout(font = font) %>%
  #remove the plotly buttons from the mode bar
  config(displayModeBar = FALSE)

When we hover over the points, we can see that the April value is in line with the expectations of Turkish exporters and most economists. Hence, the parity may jump sharply in the near future, especially after the May 14 election.


