📌How to Derive the Hamiltonian for Phonons in a Crystal Lattice

27 Mar

\text{Kinetic Energy} = \frac{1}{2m} \sum_{n} \left(\sum_k P_k e^{ikna}\right) \left(\sum_{k'} P_{k'} e^{ik'na}\right)

\sum_{n} e^{i(k+k')na} = N \delta_{k,-k'}
\text{Kinetic Energy} = \frac{N}{2m} \sum_k P_k P_{-k}

\text{Potential Energy} = \frac{1}{2} \sum_{n} K (u_{n+1} - u_n)^2
u_n = \sum_k A_k e^{ikna}
u_{n+1} = \sum_k A_k e^{ik(n+1)a} = \sum_k A_k e^{ika} e^{ikna}
\text{Potential Energy} = \frac{1}{2} K \sum_{n} \left( \sum_k A_k e^{ika} e^{ikna} - \sum_k A_k e^{ikna} \right)^2
\text{Potential Energy} = \frac{1}{2} K \sum_{n} \left( \sum_k (e^{ika} - 1) A_k e^{ikna} \right)^2
\text{Potential Energy} = \frac{1}{2} K \sum_n \sum_k \sum_{k'} (e^{ika} - 1)(e^{ik'a} - 1) A_k A_{k'} e^{i(k+k')na}
\sum_n e^{i(k+k')na} = N \delta_{k,-k'}
\text{Potential Energy} = \frac{N}{2} K \sum_k (e^{ika} - 1)(e^{-ika} - 1) A_k A_{-k} = \frac{N}{2} K \sum_k |e^{ika} - 1|^2 A_k A_{-k}

This is further simplified using the identity |e^{ika} - 1|^2 = 2 - 2\cos(ka) = 4\sin^2\left(\frac{ka}{2}\right):

\text{Potential Energy} = \frac{N}{2} \sum_k 4K \sin^2\left(\frac{ka}{2}\right) A_k A_{-k}

Hamiltonian Expression:

H = \frac{N}{2} \sum_k \left(\frac{1}{m} P_k P_{-k} + 4K \sin^2\left(\frac{ka}{2}\right) A_k A_{-k} \right)

Simplified Hamiltonian with Angular Frequency:

We can introduce the angular frequency \omega_k of the phonon modes to simplify the expression:

\omega_k = \sqrt{\frac{4K}{m}} \left|\sin\left(\frac{ka}{2}\right)\right| = 2\sqrt{\frac{K}{m}} \left|\sin\left(\frac{ka}{2}\right)\right|
H = \frac{N}{2} \sum_k \left(\frac{1}{m} P_k P_{-k} + m \omega_k^2 A_k A_{-k} \right)
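
As a quick numerical check of the dispersion relation \omega_k above, here is a minimal Python sketch (the values of K, m, and a are illustrative assumptions, not taken from the derivation) that plots \omega_k across the first Brillouin zone:

import numpy as np
import matplotlib.pyplot as plt

# Illustrative parameters (assumed, arbitrary units): spring constant K, mass m, lattice spacing a
K, m, a = 1.0, 1.0, 1.0

# Wave vectors spanning the first Brillouin zone, k in [-pi/a, pi/a]
k = np.linspace(-np.pi / a, np.pi / a, 400)

# Dispersion of the 1D monatomic chain: omega_k = 2*sqrt(K/m)*|sin(k*a/2)|
omega = 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))

plt.plot(k, omega)
plt.xlabel("k")
plt.ylabel("omega(k)")
plt.title("Phonon dispersion of the 1D monatomic chain")
plt.show()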

Conclusion

📌 Path Integral Series Overview: ➡️ Part 1: Introduction to Path Integrals. Deriving the Path Integral Formulation in Quantum Mechanics

10 Mar

Introduction: From Operator Formalism to Path Integrals


Step 1: The Quantum Propagator

\boldsymbol{K(x_b, t_b; x_a, t_a) = \langle x_b | e^{-i \hat{H} (t_b - t_a)/\hbar} | x_a \rangle.}

\boldsymbol{K(x_b, t_b; x_a, t_a) = \langle x_b | \left(e^{-i \hat{H} \epsilon/\hbar}\right)^N | x_a \rangle, \qquad \epsilon = \frac{t_b - t_a}{N}.}

\boldsymbol{K(x_b, t_b; x_a, t_a) = \int dx_1 \cdots dx_{N-1} \langle x_b | e^{-i \hat{H} \epsilon/\hbar} | x_{N-1} \rangle \cdots \langle x_1 | e^{-i \hat{H} \epsilon/\hbar} | x_a \rangle.}


Step 2: Approximating the Time Evolution Operator

\boldsymbol{e^{-i \hat{H} \epsilon / \hbar} \approx e^{-i \hat{T} \epsilon / \hbar} e^{-i \hat{V} \epsilon / \hbar} + \mathcal{O}(\epsilon^2).}

  • \boldsymbol{\hat{T} = \frac{\hat{p}^2}{2m}}​ is the kinetic energy operator,
  • \boldsymbol{\hat{V} = V(\hat{x})} is the potential energy operator.

Step 3: Inserting the Identity in the Momentum Basis

\mathbb{1} = \int \frac{dp_j}{2\pi\hbar} | p_j \rangle \langle p_j |.

\boldsymbol{\langle x_{j+1} | e^{-i \hat{H} \epsilon / \hbar} | x_j \rangle = \int \frac{dp_j}{2\pi\hbar} \langle x_{j+1} | e^{-i \hat{T} \epsilon / \hbar} | p_j \rangle \langle p_j | e^{-i \hat{V} \epsilon / \hbar} | x_j \rangle.}


\boldsymbol{e^{-i \hat{T} \epsilon / \hbar} | p_j \rangle = e^{-i p_j^2 \epsilon / 2m\hbar} | p_j \rangle.}

\boldsymbol{\langle x_{j+1} | e^{-i \hat{T} \epsilon / \hbar} | p_j \rangle = e^{-i p_j^2 \epsilon / 2m\hbar} \langle x_{j+1} | p_j \rangle.}

\boldsymbol{\langle x | p_j \rangle = \frac{1}{\sqrt{2\pi\hbar}} e^{i p_j x / \hbar}.}

\boldsymbol{\langle x_{j+1} | p_j \rangle = \frac{1}{\sqrt{2\pi\hbar}} e^{i p_j x_{j+1} / \hbar}.}

\boldsymbol{\langle x_{j+1} | e^{-i \hat{T} \epsilon / \hbar} | p_j \rangle = \frac{1}{\sqrt{2\pi\hbar}} e^{-i p_j^2 \epsilon / 2m\hbar} e^{i p_j x_{j+1} / \hbar}.}

\boldsymbol{e^{-i \hat{V} \epsilon / \hbar} | x_j \rangle = e^{-i V(x_j) \epsilon / \hbar} | x_j \rangle.}

\boldsymbol{\langle p_j | e^{-i \hat{V} \epsilon / \hbar} | x_j \rangle = e^{-i V(x_j) \epsilon / \hbar} \langle p_j | x_j \rangle.}

\boldsymbol{\langle p_j | e^{-i \hat{V} \epsilon / \hbar} | x_j \rangle = \frac{1}{\sqrt{2\pi\hbar}} e^{-i V(x_j) \epsilon / \hbar} e^{-i p_j x_j / \hbar}.}


\boldsymbol{\langle x_{j+1} | e^{-i \hat{H} \epsilon / \hbar} | x_j \rangle = \int \frac{dp_j}{2\pi\hbar} e^{i p_j (x_{j+1} - x_j)/\hbar} e^{-i (p_j^2 / 2m + V(x_j)) \epsilon / \hbar}.}


\boldsymbol{K(x_b, t_b; x_a, t_a) = \lim_{N \to \infty} \int \prod_{j=1}^{N-1} dx_j \prod_{j=0}^{N-1} \frac{dp_j}{2\pi\hbar} e^{i \sum_{j=0}^{N-1} \left[ p_j (x_{j+1} - x_j) / \hbar - (p_j^2 / 2m + V(x_j)) \epsilon / \hbar \right]}.}

\boldsymbol{K(x_b, t_b; x_a, t_a) = \int \mathcal{D}x(t) \mathcal{D}p(t) e^{i \int_{t_a}^{t_b} dt \, [p \dot{x} - H(p, x)] / \hbar}.}
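
For reference, each momentum integral in the discretized expression above is a Gaussian (Fresnel) integral and can be evaluated in closed form:

\boldsymbol{\int \frac{dp_j}{2\pi\hbar} \, e^{\, i \left[ p_j (x_{j+1} - x_j) - \frac{p_j^2}{2m}\epsilon \right] / \hbar} = \sqrt{\frac{m}{2\pi i \hbar \epsilon}} \; e^{\, i m (x_{j+1} - x_j)^2 / 2\hbar\epsilon}.}

Carrying this out for every time slice removes the momentum integrations and leaves the configuration-space (Lagrangian) form of the path integral: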

\boldsymbol{K(x_b, t_b; x_a, t_a) = \int \mathcal{D}x(t) e^{i S[x] / \hbar},}

\boldsymbol{S[x] = \int_{t_a}^{t_b} dt \left( \frac{1}{2} m \dot{x}^2 - V(x) \right).}


  • Quantum Simulations: Several quantum simulation methods, most directly path-integral quantum Monte Carlo, rely on path integral formulations to approximate quantum states; variational quantum eigensolvers (VQE) address related problems by different means.

Mathematical Utility

\boldsymbol{K(x_b, t_b; x_a, t_a) = \int \mathcal{D}x(t) \, e^{i S[x]/\hbar}}


\boldsymbol{G(x, x'; E) = \int \mathcal{D}x(t) e^{i S[x]/\hbar}}


\boldsymbol{K(E) = \int \mathcal{D}x(t) e^{i S[x]/\hbar}}


\boldsymbol{P(x,t) = \int \mathcal{D}x(t) e^{-S[x]/\hbar}}
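
The Euclidean (imaginary-time) form above turns the oscillatory weight e^{iS/\hbar} into a decaying one, which makes it directly computable and connects path integrals to diffusion and stochastic processes, including the Monte Carlo price simulation below. As a minimal sketch (assuming natural units \hbar = m = \omega = 1 for a harmonic oscillator, chosen here only for illustration), the Euclidean path integral can be evaluated by composing short-time propagators on a grid; the estimated ground-state energy should approach 1/2:

import numpy as np

hbar = m = omega = 1.0
x = np.linspace(-8, 8, 601)              # position grid
dx = x[1] - x[0]
eps = 0.01                               # short imaginary-time step
V = 0.5 * m * omega**2 * x**2            # harmonic potential

# Short-time Euclidean propagator rho_eps(x, x') in Trotter-split form
X, Xp = np.meshgrid(x, x, indexing="ij")
rho = np.sqrt(m / (2 * np.pi * hbar * eps)) * np.exp(
    -m * (X - Xp) ** 2 / (2 * hbar * eps)
    - eps * (V[:, None] + V[None, :]) / (2 * hbar)
)

# Double the imaginary time repeatedly ("matrix squaring") to reach a large beta
rho_tau = rho.copy()
tau = eps
for _ in range(10):                      # beta = eps * 2**10 ~ 10, essentially the ground state
    rho_tau = dx * (rho_tau @ rho_tau)
    tau *= 2

Z = np.trace(rho_tau) * dx               # partition function Z(beta) ~ exp(-beta * E0)
print(f"beta = {tau:.2f}, estimated ground-state energy = {-np.log(Z) / tau:.4f}")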

# Reinstall necessary packages in Google Colab after execution state reset
!pip install numpy pandas matplotlib yfinance scipy

# Re-import required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import yfinance as yf

# Fetch historical AAPL stock data from Yahoo Finance
ticker = "AAPL"
start_date = "2015-01-01"
end_date = "2024-03-01"
data = yf.download(ticker, start=start_date, end=end_date)

# Compute log returns
data['Log_Returns'] = np.log(data['Close'] / data['Close'].shift(1))
data.dropna(inplace=True)

# Define Geometric Brownian Motion (GBM) Simulation
def simulate_gbm(S0, mu, sigma, T, dt, N):
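    """Simulate N geometric Brownian motion price paths over int(T/dt) steps.

    Each step applies S_t = S_{t-1} * exp((mu - 0.5*sigma**2)*dt + sigma*dW),
    where dW is normal with mean 0 and standard deviation sqrt(dt); mu and
    sigma are per-step (here, daily) drift and volatility."""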
    np.random.seed(42)
    steps = int(T / dt)
    paths = np.zeros((steps, N))
    paths[0] = S0
    for t in range(1, steps):
        dW = np.random.normal(0, np.sqrt(dt), N)
        paths[t] = paths[t-1] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * dW)
    return paths

# Estimate drift and volatility for GBM
mu = np.mean(data['Log_Returns'])  
sigma = np.std(data['Log_Returns'])  

# Backtest Parameters
initial_capital = 10000
T = 252  # 1 year trading days
dt = 1  # 1-day interval
N = 1000  # Monte Carlo Simulations

# Simulate GBM paths
predicted_paths = simulate_gbm(data['Close'].iloc[-1], mu, sigma, T, dt, N)
expected_price = np.mean(predicted_paths[-1])

# Backtesting: Buy if predicted price > current price
investment = initial_capital if expected_price > data['Close'].iloc[-1].item() else 0
# Hold cash (keep the initial capital) if no position is taken
final_value = investment * (expected_price / data['Close'].iloc[-1].item()) if investment > 0 else initial_capital

# Compute ROI
roi = (final_value - initial_capital) / initial_capital * 100

# Plot GBM Simulation
plt.figure(figsize=(12,6))
plt.plot(predicted_paths[:, :10], alpha=0.3)  
plt.axhline(y=expected_price, color='r', linestyle='--', label=f'Expected Price: ${expected_price:.2f}')
plt.xlabel("Days")
plt.ylabel("Price")
plt.title(f"Simulated AAPL Price Paths (GBM) - Expected Price: ${expected_price:.2f}")
plt.legend()
plt.show()

# Print Backtest Results
print(f"Initial Capital: ${initial_capital:,.2f}")
print(f"Final Portfolio Value: ${final_value:,.2f}")
print(f"Total ROI: {roi:.2f}%")

# Rolling GBM strategy: addresses the unrealistic compounding issue, balances
# reinvestment with stop-loss/profit-taking, and adds smarter entries using a
# 50-day Moving Average with adjusted risk management

# Compute the 50-day moving average for trend confirmation
data['50_MA'] = data['Close'].rolling(window=50).mean()

# Define rolling window size (e.g., 1 month ~ 21 trading days)
window_size = 21

# Initialize portfolio and tracking variables
portfolio_value = initial_capital
holdings = 0
investment_log = []

# Adjusted reinvestment rate (90% of portfolio reinvested per trade)
reinvestment_rate = 0.90

# Adjusted stop-loss and profit-taking thresholds
stop_loss_threshold = -0.10  # Exit if trade loses 10%
profit_take_threshold = 0.25  # Take profits if trade gains 25%

# Iterate over rolling windows
for i in range(window_size, len(data) - T, window_size):
    # Get historical data for the rolling window
    window_data = data.iloc[i - window_size:i]

    # Recalculate drift and volatility for the rolling period
    mu_rolling = np.mean(window_data['Log_Returns'])
    sigma_rolling = np.std(window_data['Log_Returns'])

    # Simulate future price paths for the next T days
    predicted_paths = simulate_gbm(window_data['Close'].iloc[-1], mu_rolling, sigma_rolling, T, dt, N)
    expected_price = np.mean(predicted_paths[-1]).item()
    last_close_price = window_data['Close'].iloc[-1].item()

    # Compute expected return percentage
    expected_return = (expected_price - last_close_price) / last_close_price

    # **New Condition: Only Buy If Price is Above 50-Day Moving Average**
    is_above_50_ma = last_close_price > data['50_MA'].iloc[i]

    # Decision: Buy if expected price is higher than the current price & above 50-day MA
    if expected_price > last_close_price and expected_return > stop_loss_threshold and is_above_50_ma:
        capital_allocated = portfolio_value * reinvestment_rate  # Invest 90% of portfolio
        holdings = capital_allocated / last_close_price

        # Apply stop-loss: exit trade if loss exceeds threshold
        if expected_return < stop_loss_threshold:
            holdings = 0  # Exit trade

        # Apply profit-taking: exit trade if gain exceeds threshold
        if expected_return > profit_take_threshold:
            holdings = 0  # Take profit

    else:
        holdings = 0

    # Update portfolio value at the end of the period with relaxed reinvestment constraints
    new_portfolio_value = holdings * expected_price if holdings > 0 else portfolio_value

    # Apply a moderate reinvestment cap to prevent excessive compounding (Max 20% growth per period)
    new_portfolio_value = min(new_portfolio_value, portfolio_value * 1.2)

    # Append to investment log
    investment_log.append((window_data.index[-1], new_portfolio_value))

    # Update portfolio value
    portfolio_value = new_portfolio_value

# Compute final ROI with optimized strategy using trend confirmation
roi_rolling_smart = (portfolio_value - initial_capital) / initial_capital * 100

# Create a dataframe for tracking investment performance
investment_df_smart = pd.DataFrame(investment_log, columns=['Date', 'Portfolio Value'])

# Plot optimized portfolio growth over time with trend confirmation
plt.figure(figsize=(12,6))
plt.plot(investment_df_smart['Date'], investment_df_smart['Portfolio Value'], label="Portfolio Value (Smart GBM Strategy)")
plt.axhline(y=initial_capital, color='r', linestyle='--', label="Initial Capital ($10,000)")
plt.xlabel("Date")
plt.ylabel("Portfolio Value ($)")
plt.title("Smart Portfolio Performance Using GBM with Trend Confirmation")
plt.legend()
plt.grid()
plt.show()

# Display final smart backtest results
rolling_backtest_results_smart = pd.DataFrame({
    "Initial Capital ($)": [initial_capital],
    "Final Portfolio Value ($)": [portfolio_value],
    "Smart ROI (%)": [roi_rolling_smart]
})

# ✅ Use standard Pandas display function for Google Colab compatibility
from IPython.display import display
display(rolling_backtest_results_smart)

# Print final optimized ROI explicitly
print("\n--- Smart Final ROI Summary ---")
print(f"Initial Capital: ${initial_capital:,.2f}")
print(f"Final Portfolio Value (Smart Strategy): ${portfolio_value:,.2f}")
print(f"Total ROI (Smart Strategy): {roi_rolling_smart:.2f}%")

###################
# Compute Sharpe Ratio for risk-adjusted return analysis

# Risk-free rate assumption (US Treasury Bonds yield ~3% annualized)
risk_free_rate = 0.03

# Convert annualized risk-free rate to daily
risk_free_daily = risk_free_rate / 252  

# Compute daily returns from investment log
investment_df_smart['Daily_Returns'] = investment_df_smart['Portfolio Value'].pct_change().dropna()

# Compute Sharpe Ratio
excess_returns = investment_df_smart['Daily_Returns'] - risk_free_daily
sharpe_ratio = excess_returns.mean() / excess_returns.std()
sharpe_ratio_annualized = sharpe_ratio * np.sqrt(252)  # Annualized Sharpe Ratio

# Display results
sharpe_results = pd.DataFrame({
    "Final Portfolio Value ($)": [portfolio_value],
    "Total ROI (%)": [roi_rolling_smart],
    "Sharpe Ratio (Annualized)": [sharpe_ratio_annualized]
})

from IPython.display import display
display(sharpe_results)

# Print Sharpe Ratio Summary
print("\n--- Sharpe Ratio Analysis ---")
print(f"Final Portfolio Value: ${portfolio_value:,.2f}")
print(f"Total ROI: {roi_rolling_smart:.2f}%")
print(f"Annualized Sharpe Ratio: {sharpe_ratio_annualized:.2f}")

# Interpretation:
# - Sharpe Ratio > 1.0: Good risk-adjusted returns.
# - Sharpe Ratio > 2.0: Excellent strategy.
# - Sharpe Ratio < 1.0: Returns may not sufficiently compensate for risk.

########## WITH FEES ###########
# Adding trading fees to the strategy (default: 0.1% per trade)
trading_fee_rate = 0.001  # 0.1% fee per transaction

# Initialize portfolio with trading fees
portfolio_value = initial_capital
holdings = 0
investment_log = []

# Iterate over rolling windows with fees
for i in range(window_size, len(data) - T, window_size):
    window_data = data.iloc[i - window_size:i]

    # Recalculate drift and volatility
    mu_rolling = np.mean(window_data['Log_Returns'])
    sigma_rolling = np.std(window_data['Log_Returns'])

    # Simulate future prices
    predicted_paths = simulate_gbm(window_data['Close'].iloc[-1], mu_rolling, sigma_rolling, T, dt, N)
    expected_price = np.mean(predicted_paths[-1]).item()
    last_close_price = window_data['Close'].iloc[-1].item()

    # Compute expected return percentage
    expected_return = (expected_price - last_close_price) / last_close_price

    # Check if price is above 50-day Moving Average
    is_above_50_ma = last_close_price > data['50_MA'].iloc[i]

    # Decision: Buy if expected price is higher & above 50-day MA
    if expected_price > last_close_price and expected_return > stop_loss_threshold and is_above_50_ma:
        capital_allocated = portfolio_value * reinvestment_rate
        holdings = capital_allocated / last_close_price

        # Apply stop-loss and profit-taking
        if expected_return < stop_loss_threshold or expected_return > profit_take_threshold:
            holdings = 0  

        # Apply trading fee for entering a trade
        portfolio_value -= capital_allocated * trading_fee_rate

    else:
        holdings = 0

    # Update portfolio value, subtracting exit fees
    new_portfolio_value = holdings * expected_price if holdings > 0 else portfolio_value
    new_portfolio_value -= new_portfolio_value * trading_fee_rate  # Apply exit fee

    # Cap growth per rolling window
    new_portfolio_value = min(new_portfolio_value, portfolio_value * 1.2)

    # Append to investment log
    investment_log.append((window_data.index[-1], new_portfolio_value))

    # Update portfolio value
    portfolio_value = new_portfolio_value

# Compute final ROI with fees
roi_rolling_fees = (portfolio_value - initial_capital) / initial_capital * 100

# Compute Sharpe Ratio with fees
investment_df_fees = pd.DataFrame(investment_log, columns=['Date', 'Portfolio Value'])
investment_df_fees['Daily_Returns'] = investment_df_fees['Portfolio Value'].pct_change().dropna()

# Compute excess returns & Sharpe Ratio with fees
excess_returns_fees = investment_df_fees['Daily_Returns'] - risk_free_daily
sharpe_ratio_fees = excess_returns_fees.mean() / excess_returns_fees.std()
sharpe_ratio_annualized_fees = sharpe_ratio_fees * np.sqrt(252)  

# Display final results with fees
backtest_results_fees = pd.DataFrame({
    "Initial Capital ($)": [initial_capital],
    "Final Portfolio Value ($)": [portfolio_value],
    "Total ROI with Fees (%)": [roi_rolling_fees],
    "Sharpe Ratio (Annualized, With Fees)": [sharpe_ratio_annualized_fees]
})

from IPython.display import display
display(backtest_results_fees)

# Print Summary
print("\n--- Final Backtest Results (Including Trading Fees) ---")
print(f"Initial Capital: ${initial_capital:,.2f}")
print(f"Final Portfolio Value (With Fees): ${portfolio_value:,.2f}")
print(f"Total ROI (With Fees): {roi_rolling_fees:.2f}%")
print(f"Sharpe Ratio (With Fees, Annualized): {sharpe_ratio_annualized_fees:.2f}")

##############################################

# Optimizing trading fees impact with dynamic trade sizing and risk-adjusted entries

# Define dynamic fee structure
def get_trading_fee(trade_size):
    """ Variable trading fee: Lower for larger trades """
    if trade_size < 5000:
        return 0.001  # 0.1% fee for small trades
    else:
        return 0.0005  # 0.05% fee for large trades

# Calculate ATR (Average True Range) for dynamic stop-loss/profit-taking
data['High-Low'] = data['High'] - data['Low']
data['High-Close'] = np.abs(data['High'] - data['Close'].shift(1))
data['Low-Close'] = np.abs(data['Low'] - data['Close'].shift(1))
data['True Range'] = data[['High-Low', 'High-Close', 'Low-Close']].max(axis=1)
data['ATR'] = data['True Range'].rolling(window=14).mean()

# Initialize portfolio and tracking variables
portfolio_value = initial_capital
holdings = 0
investment_log = []

# Iterate over rolling windows with optimized fee & risk management
for i in range(window_size, len(data) - T, window_size):
    window_data = data.iloc[i - window_size:i]

    # Recalculate drift and volatility
    mu_rolling = np.mean(window_data['Log_Returns'])
    sigma_rolling = np.std(window_data['Log_Returns'])

    # Simulate future prices
    predicted_paths = simulate_gbm(window_data['Close'].iloc[-1], mu_rolling, sigma_rolling, T, dt, N)
    expected_price = np.mean(predicted_paths[-1]).item()
    last_close_price = window_data['Close'].iloc[-1].item()

    # Compute expected return percentage
    expected_return = (expected_price - last_close_price) / last_close_price

    # Check if price is above 50-day MA & get ATR for volatility-adjusted stop-loss
    is_above_50_ma = last_close_price > data['50_MA'].iloc[i]
    current_atr = data['ATR'].iloc[i]

    # Dynamic stop-loss and profit-taking (adjusting based on volatility)
    stop_loss_dynamic = -current_atr / last_close_price  # ATR-based stop-loss
    profit_take_dynamic = current_atr * 2 / last_close_price  # ATR-based profit target

    # Decision: Buy if expected price is higher & above 50-day MA
    if expected_price > last_close_price and expected_return > stop_loss_dynamic and is_above_50_ma:
        capital_allocated = portfolio_value * reinvestment_rate  # Invest 90% of portfolio
        trade_fee = capital_allocated * get_trading_fee(capital_allocated)  # Get dynamic fee
        capital_allocated -= trade_fee  # Deduct entry fee
        holdings = capital_allocated / last_close_price

        # Apply stop-loss and profit-taking
        if expected_return < stop_loss_dynamic or expected_return > profit_take_dynamic:
            holdings = 0  # Exit trade

    else:
        holdings = 0

    # Update portfolio value, subtracting exit fees
    new_portfolio_value = holdings * expected_price if holdings > 0 else portfolio_value
    exit_fee = new_portfolio_value * get_trading_fee(new_portfolio_value)
    new_portfolio_value -= exit_fee  # Apply exit fee

    # Cap growth per rolling window (max 20% per period)
    new_portfolio_value = min(new_portfolio_value, portfolio_value * 1.2)

    # Append to investment log
    investment_log.append((window_data.index[-1], new_portfolio_value))

    # Update portfolio value
    portfolio_value = new_portfolio_value

# Compute final ROI with optimized fee & trade management
roi_rolling_final = (portfolio_value - initial_capital) / initial_capital * 100

# Compute Sharpe Ratio with optimized strategy
investment_df_final = pd.DataFrame(investment_log, columns=['Date', 'Portfolio Value'])
investment_df_final['Daily_Returns'] = investment_df_final['Portfolio Value'].pct_change().dropna()

# Compute excess returns & Sharpe Ratio with fees
excess_returns_final = investment_df_final['Daily_Returns'] - risk_free_daily
sharpe_ratio_final = excess_returns_final.mean() / excess_returns_final.std()
sharpe_ratio_annualized_final = sharpe_ratio_final * np.sqrt(252)  

# Display final results with optimized trading fee management
backtest_results_final = pd.DataFrame({
    "Initial Capital ($)": [initial_capital],
    "Final Portfolio Value ($)": [portfolio_value],
    "Total ROI (Final Optimized %)": [roi_rolling_final],
    "Sharpe Ratio (Final Optimized, Annualized)": [sharpe_ratio_annualized_final]
})

from IPython.display import display
display(backtest_results_final)

# Print Final Optimized Backtest Summary
print("\n--- Final Optimized Backtest Results (With Dynamic Trading Fees & Risk Management) ---")
print(f"Initial Capital: ${initial_capital:,.2f}")
print(f"Final Portfolio Value (Optimized): ${portfolio_value:,.2f}")
print(f"Total ROI (Optimized Strategy): {roi_rolling_final:.2f}%")
print(f"Sharpe Ratio (Optimized, Annualized): {sharpe_ratio_annualized_final:.2f}")

#############################

# Fetch historical data for BTC and TSLA from Yahoo Finance for comparison
btc_ticker = "BTC-USD"
tsla_ticker = "TSLA"
start_date = "2015-01-01"
end_date = "2024-03-01"

# Download data for BTC and TSLA
btc_data = yf.download(btc_ticker, start=start_date, end=end_date)
tsla_data = yf.download(tsla_ticker, start=start_date, end=end_date)

# Compute log returns for BTC and TSLA
btc_data['Log_Returns'] = np.log(btc_data['Close'] / btc_data['Close'].shift(1))
btc_data.dropna(inplace=True)

tsla_data['Log_Returns'] = np.log(tsla_data['Close'] / tsla_data['Close'].shift(1))
tsla_data.dropna(inplace=True)

# Calculate 200-day Moving Average for trend-based filtering
btc_data['200_MA'] = btc_data['Close'].rolling(window=200).mean()
tsla_data['200_MA'] = tsla_data['Close'].rolling(window=200).mean()

# Function to backtest strategy on a given stock
def backtest_strategy(data, asset_name):
    global portfolio_value

    # Reinitialize portfolio
    portfolio_value = initial_capital
    holdings = 0
    investment_log = []

    # Iterate over rolling windows
    for i in range(window_size, len(data) - T, window_size):
        window_data = data.iloc[i - window_size:i]

        # Recalculate drift and volatility
        mu_rolling = np.mean(window_data['Log_Returns'])
        sigma_rolling = np.std(window_data['Log_Returns'])

        # Simulate future price paths
        predicted_paths = simulate_gbm(window_data['Close'].iloc[-1], mu_rolling, sigma_rolling, T, dt, N)
        expected_price = np.mean(predicted_paths[-1]).item()
        last_close_price = window_data['Close'].iloc[-1].item()

        # Compute expected return percentage
        expected_return = (expected_price - last_close_price) / last_close_price

        # Check if price is above 200-day Moving Average
        is_above_200_ma = last_close_price > data['200_MA'].iloc[i]

        # ATR for volatility-adjusted stop-loss/profit-taking
        current_atr = data['ATR'].iloc[i] if 'ATR' in data.columns else 0.02  # Default ATR if missing
        stop_loss_dynamic = -current_atr / last_close_price  # ATR-based stop-loss
        profit_take_dynamic = current_atr * 2 / last_close_price  # ATR-based profit target

        # Decision: Buy if expected price is higher & above 200-day MA
        if expected_price > last_close_price and expected_return > stop_loss_dynamic and is_above_200_ma:
            capital_allocated = portfolio_value * reinvestment_rate  # Invest 90% of portfolio
            trade_fee = capital_allocated * get_trading_fee(capital_allocated)  # Get variable fee
            capital_allocated -= trade_fee  # Deduct entry fee
            holdings = capital_allocated / last_close_price

            # Apply stop-loss and profit-taking
            if expected_return < stop_loss_dynamic or expected_return > profit_take_dynamic:
                holdings = 0  # Exit trade

        else:
            holdings = 0

        # Update portfolio value, subtracting exit fees
        new_portfolio_value = holdings * expected_price if holdings > 0 else portfolio_value
        exit_fee = new_portfolio_value * get_trading_fee(new_portfolio_value)
        new_portfolio_value -= exit_fee  # Apply exit fee

        # Cap growth per rolling window (max 20% per period)
        new_portfolio_value = min(new_portfolio_value, portfolio_value * 1.2)

        # Append to investment log
        investment_log.append((window_data.index[-1], new_portfolio_value))

        # Update portfolio value
        portfolio_value = new_portfolio_value

    # Compute final ROI
    roi_final = (portfolio_value - initial_capital) / initial_capital * 100

    # Compute Sharpe Ratio
    investment_df = pd.DataFrame(investment_log, columns=['Date', 'Portfolio Value'])
    investment_df['Daily_Returns'] = investment_df['Portfolio Value'].pct_change().dropna()
    excess_returns = investment_df['Daily_Returns'] - risk_free_daily
    sharpe_ratio = excess_returns.mean() / excess_returns.std()
    sharpe_ratio_annualized = sharpe_ratio * np.sqrt(252)

    # Store results
    results = pd.DataFrame({
        "Asset": [asset_name],
        "Initial Capital ($)": [initial_capital],
        "Final Portfolio Value ($)": [portfolio_value],
        "Total ROI (%)": [roi_final],
        "Sharpe Ratio (Annualized)": [sharpe_ratio_annualized]
    })

    return results

# Run strategy on BTC & TSLA
btc_results = backtest_strategy(btc_data, "Bitcoin (BTC)")
tsla_results = backtest_strategy(tsla_data, "Tesla (TSLA)")

# Combine results
final_results = pd.concat([btc_results, tsla_results], ignore_index=True)

# Display final results
from IPython.display import display
display(final_results)

# Print summary
print("\n--- Strategy Backtest Results on BTC & TSLA ---")
for index, row in final_results.iterrows():
    print(f"Asset: {row['Asset']}")
    print(f"  Final Portfolio Value: ${row['Final Portfolio Value ($)']:,.2f}")
    print(f"  Total ROI: {row['Total ROI (%)']:.2f}%")
    print(f"  Sharpe Ratio: {row['Sharpe Ratio (Annualized)']:.2f}\n")


\boldsymbol{S[x] = \int dt \, L(x, \dot{x})}

A New Kind of Science: Why We Should Work With Nature, Not Against It

9 Mar

The Case of the Czech Beavers: Nature’s Engineers


Introduction: Transitioning from the Old Science to a New Paradigm of Collaboration with Nature

The Old Paradigm: Dominance and Control

The New Paradigm: Collaboration and Resilience


Principles of This New Science


Why This Matters Now



Nature-Based Solutions: Harnessing Ecosystem Services


Beavers as Ecological Engineers


Rewilding: Restoring Balance Through Collaboration


Indigenous Knowledge: Learning from Reciprocal Relationships


Scientific Frameworks: Building on Nature’s Blueprint


1. Principles for Coupling Engineering & Nature


2. Mathematical Tools to Quantify Nature’s Value


3. Economic Strategies for Governments

  • Policies:

4. Policy Frameworks to Align Growth & Ecology

  • Initiatives:

5. Case Studies: Success Stories

a) The Netherlands’ “Building with Nature”

b) China’s Sponge Cities

c) New Zealand’s “Well-Being Budget”


Final Thought

The Czech beaver story serves as a powerful reminder: animals and ecosystems are not passive resources but active collaborators in shaping our world. By tapping into their inherent skills—whether it’s beaver engineering, wolf trophic cascades, or plant pollination—we can reduce costs, enhance resilience, and foster biodiversity.

Further Reading:

Unraveling the Mysteries of Radioactivity: Simon Shnoll’s Groundbreaking Research – 2

3 Mar

The Initial Discovery

A Reluctant Return

Expanding the Scope of Research

The Role of Energy

Synchronization of Measurements

The Earth and Celestial Influence

Collaborative Experiments and Validation

Cross-Continental Experiments

Scientific Implications

Future Directions

Conclusion

The Discovery of Neptune: How Newtonian Physics Predicted a New Planet

25 Feb

In the mid-19th century, the discovery of Neptune marked one of the most profound triumphs of Newtonian physics. For the first time in history, a planet’s existence was predicted purely through mathematical reasoning, without direct observation. This achievement not only demonstrated the power of classical mechanics but also had profound philosophical implications about the nature of the universe and humanity’s ability to understand it.

The Problem: Uranus’ Unexplained Orbital Anomalies

When William Herschel discovered Uranus in 1781, astronomers initially assumed its orbit would follow the laws of celestial mechanics as described by Isaac Newton. However, by the early 19th century, observations revealed discrepancies between Uranus’ observed position and where it should have been according to calculated orbits based on Newton’s laws.

These deviations suggested that some unknown force was acting upon Uranus. Two hypotheses emerged:

  1. Newton’s laws might be incomplete or incorrect.
  2. Another massive object—another planet—might exist beyond Uranus, exerting gravitational influence on its orbit.

Urbain Le Verrier and John Couch Adams pursued the second hypothesis using rigorous mathematical methods.

The Mathematics Behind the Prediction

Step 1: Gravitational Perturbations

Newton’s law of universal gravitation states: F=\frac{Gm_1m_2}{r^2}, where F is the gravitational force, G is the gravitational constant, m_1​ and m_2​ are the masses of two bodies, and r is the distance between them.

For planets orbiting the Sun, their motion can be approximated using Kepler’s laws, which are derived from Newton’s equations. However, when additional bodies (like other planets) are present, their gravitational effects cause small perturbations in the primary body’s orbit.

To account for this, we use the following equation for the acceleration due to perturbations:

\mathbf{a}_{\text{perturbation}}(\mathbf{r}) = \sum_i \frac{G m_i}{|\mathbf{r}_i - \mathbf{r}|^3} (\mathbf{r}_i - \mathbf{r})

Here:

  • \mathbf{r} is the position vector of the planet being perturbed (Uranus).
  • \mathbf{r}_i is the position vector of the perturbing body (the hypothetical new planet).

By analyzing Uranus’ anomalous orbit, both Le Verrier and Adams aimed to deduce the location, mass, and trajectory of the unseen perturbing body.

Step 2: Observational Data and Initial Assumptions

Using decades of precise astronomical measurements, Le Verrier and Adams constructed tables of Uranus’ positions over time. They compared these with predictions made under the assumption of no external influences. The differences allowed them to compute the net perturbing force.

They assumed:

  • The perturbing body followed an elliptical orbit around the Sun.
  • Its mass was significant enough to affect Uranus noticeably but not so large as to dominate the solar system’s dynamics.

From these assumptions, they derived differential equations describing the perturbed motion of Uranus.

Step 3: Solving the Equations for Neptune’s Position

The key equation governing the motion of Uranus under the influence of both the Sun and a hypothetical perturbing planet is: \mathbf{r}'' + \frac{\mu}{r^3} \mathbf{r} = \mathbf{f}_{\text{perturbation}}

Where:

  • \mu = G M_{\odot} is the standard gravitational parameter of the Sun.
  • \mathbf{f}_{\text{perturbation}} represents the gravitational force per unit mass of Uranus (the perturbing acceleration) exerted by the hypothetical planet.

To solve this system, we need to break it into manageable components. Here’s how Le Verrier and Adams approached it step-by-step:

  1. Decomposing the Problem. (a) Orbital Dynamics Without Perturbations: If there were no perturbing forces (\mathbf{f}_{\text{perturbation}} = 0), Uranus would follow a Keplerian orbit governed by:

\mathbf{r}'' + \frac{\mu}{r^3} \mathbf{r} = 0. This equation describes an elliptical orbit with parameters such as semi-major axis a, eccentricity e, inclination i, etc., all derived from observational data.

Including Perturbations: When a second body (the hypothetical planet) exerts a gravitational force on Uranus, the total acceleration becomes:

\mathbf{a}_{\text{total}} = \mathbf{a}_{\text{Sun}} + \mathbf{a}_{\text{perturbation}}

Here:

\mathbf{a}_{\text{perturbation}} = \mathbf{f}_{\text{perturbation}}, the perturbing force per unit mass of Uranus.

Thus, the full equation becomes:

\mathbf{r}'' + \frac{\mu}{r^3} \mathbf{r} = \mathbf{f}_{\text{perturbation}}

Our goal is to determine the properties of the perturbing planet (position, mass, orbit) that best explain the observed deviations in Uranus’ orbit.

2. Expressing the Perturbing Force: The perturbing force arises from the gravitational attraction between Uranus and the hypothetical planet. Using Newton’s law of gravitation, the perturbing force per unit mass of Uranus is:

\mathbf{f}_{\text{perturbation}} = \frac{G m_{\text{planet}}}{|\mathbf{r}_{\text{planet}} - \mathbf{r}_{\text{Uranus}}|^3} (\mathbf{r}_{\text{planet}} - \mathbf{r}_{\text{Uranus}})

Where: – m_{\text{planet}} is the mass of the hypothetical planet. – \mathbf{r}_{\text{planet}} is the position vector of the hypothetical planet. – \mathbf{r}_{\text{Uranus}} is the position vector of Uranus. Substituting this expression into the main equation gives:

\mathbf{r}'' + \frac{\mu}{r^3} \mathbf{r} = \frac{G m_{\text{planet}}}{|\mathbf{r}_{\text{planet}} - \mathbf{r}_{\text{Uranus}}|^3} (\mathbf{r}_{\text{planet}} - \mathbf{r}_{\text{Uranus}})

3. Iterative Solution Method

Solving this equation analytically is extremely challenging due to its complexity. Instead, Le Verrier and Adams used numerical methods and iterative approximations. Here’s how they proceeded:

(a) Initial Guesses: They started with initial guesses for the hypothetical planet’s: – Mass (m_{\text{planet}} ) – Semi-major axis (a_{\text{planet}} ) – Eccentricity (e_{\text{planet}} ) – Inclination (i_{\text{planet}} ) These parameters define the planet’s orbit.

(b) Computing Perturbations: Using the guessed orbital parameters, they computed the perturbing force at various points along Uranus’ orbit. This required calculating: – The relative position vector (\mathbf{r}_{\text{planet}} - \mathbf{r}_{\text{Uranus}} ) – The magnitude of the perturbing force using the formula above.

(c) Comparing Predictions to Observations: With the perturbing force calculated, they solved the differential equation numerically to predict Uranus’ trajectory. They then compared their predictions to actual observations of Uranus’ position over time.

(d) Adjusting Parameters: If the predicted trajectory did not match observations, they adjusted the hypothetical planet’s parameters (mass, orbit, etc.) and repeated the process. This iteration continued until the model closely matched the observed anomalies in Uranus’ orbit.

4. Key Mathematical Steps: Let’s walk through the mathematical details of one iteration:

(a) Relative Position Vector: At any given time t , the positions of Uranus and the hypothetical planet are:

\mathbf{r}_{\text{Uranus}}(t) = (x_U, y_U, z_U)

\mathbf{r}_{\text{planet}}(t) = (x_P, y_P, z_P)

The relative position vector is:

\Delta \mathbf{r}(t) = \mathbf{r}_{\text{planet}}(t) - \mathbf{r}_{\text{Uranus}}(t)

Its magnitude is:

|\Delta \mathbf{r}(t)| = \sqrt{(x_P - x_U)^2 + (y_P - y_U)^2 + (z_P - z_U)^2}

(b) Perturbing Force: The perturbing force per unit mass is:

\mathbf{f}_{\text{perturbation}}(t) = \frac{G m_{\text{planet}}}{|\Delta \mathbf{r}(t)|^3} \Delta \mathbf{r}(t)

Breaking this into components:

f_x = \frac{G m_{\text{planet}}}{|\Delta \mathbf{r}|^3} (x_P - x_U)

f_y = \frac{G m_{\text{planet}}}{|\Delta \mathbf{r}|^3} (y_P - y_U)

f_z = \frac{G m_{\text{planet}}}{|\Delta \mathbf{r}|^3} (z_P - z_U)

(c) Updating Uranus’ Orbit: The perturbed motion of Uranus is governed by:

x_U'' + \frac{\mu}{r^3} x_U = f_x

y_U'' + \frac{\mu}{r^3} y_U = f_y

z_U'' + \frac{\mu}{r^3} z_U = f_z

These equations can be solved numerically using techniques like Euler’s method or Runge-Kutta methods.
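
To make this concrete, here is a minimal, purely illustrative Python sketch (not Le Verrier’s historical computation; the perturber’s mass and the circular, coplanar starting orbits are simplifying assumptions) that integrates Uranus’ motion with and without a hypothetical outer planet and reports the resulting longitude residuals:

import numpy as np

# Units: AU, years, solar masses, so that G*M_sun = 4*pi^2 (AU^3/yr^2)
mu_sun = 4 * np.pi**2
mu_pl = 5e-5 * mu_sun            # assumed perturber mass, roughly 17 Earth masses
a_U, a_P = 19.2, 30.1            # semi-major axes of Uranus and the perturber (AU)

def acceleration(r_U, r_P, perturbed):
    # Sun's pull on Uranus, plus (optionally) the outer planet's pull
    a = -mu_sun * r_U / np.linalg.norm(r_U)**3
    if perturbed:
        d = r_P - r_U
        a += mu_pl * d / np.linalg.norm(d)**3
    return a

def uranus_longitudes(perturbed, years=80.0, dt=0.01):
    # Leapfrog (kick-drift-kick) integration; the perturber stays on a circular orbit
    n = int(years / dt)
    omega_P = np.sqrt(mu_sun / a_P**3)                  # perturber's mean motion
    r_U = np.array([a_U, 0.0])
    v_U = np.array([0.0, np.sqrt(mu_sun / a_U)])        # circular-orbit speed
    longitudes = np.empty(n)
    for i in range(n):
        r_P = a_P * np.array([np.cos(omega_P * i * dt), np.sin(omega_P * i * dt)])
        v_half = v_U + 0.5 * dt * acceleration(r_U, r_P, perturbed)
        r_U = r_U + dt * v_half
        r_P_next = a_P * np.array([np.cos(omega_P * (i + 1) * dt), np.sin(omega_P * (i + 1) * dt)])
        v_U = v_half + 0.5 * dt * acceleration(r_U, r_P_next, perturbed)
        longitudes[i] = np.arctan2(r_U[1], r_U[0])
    return np.unwrap(longitudes)

# Difference in heliocentric longitude between perturbed and unperturbed Uranus, in arcseconds
residual = (uranus_longitudes(True) - uranus_longitudes(False)) * 206265
print(f"maximum longitude residual over 80 years: {np.max(np.abs(residual)):.1f} arcsec")

Varying the assumed mass and orbit of the perturber until such residuals match the recorded anomalies is, in spirit, the iterative fitting described above.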

5. Final Solution: After many iterations, Le Verrier and Adams arrived at the following approximate values for the hypothetical planet (Neptune):

– Mass: ~17 times Earth’s mass

– Distance from the Sun: ~30 AU

– Orbital period: ~165 years

These results matched subsequent telescopic observations almost perfectly.

Step 4: Results and Verification: Both Le Verrier and Adams independently concluded that a planet existed roughly 30 AU from the Sun (about one and a half times the distance of Uranus). Le Verrier sent his calculations to Johann Galle at the Berlin Observatory, who pointed his telescope toward the predicted location—and found Neptune within hours! Adams’ work was less publicized initially, leading to controversy over credit. Nevertheless, both men demonstrated the predictive power of Newtonian mechanics.

Le Verrier’s Error with Vulcan: Encouraged by his success with Neptune, Le Verrier turned his attention to Mercury. Mercury’s perihelion (closest approach to the Sun) precesses by about 5600 arcseconds per century, and a small residual of roughly 43 arcseconds per century could not be explained by known planetary perturbations. Assuming another planet must lie closer to the Sun than Mercury, Le Verrier proposed “Vulcan.” He even claimed evidence for its existence based on alleged transient phenomena observed near the Sun during eclipses. However, subsequent investigations failed to find any such planet. Why did Le Verrier err here?

– Insufficient Data: Unlike Uranus, whose anomalies were well-documented over long periods, Mercury’s precession was subtle and harder to isolate.

– Relativistic Effects: Albert Einstein later showed that Mercury’s precession arises naturally from general relativity, which modifies Newtonian gravity near massive objects like the Sun. At the time, Newtonian physics lacked the tools to describe such effects. Thus, while Newtonian mechanics triumphed with Neptune, its limitations became apparent with Vulcan.

– The Triumph of Newtonian Physics: The successful prediction of Neptune reinforced the belief that the universe operates according to deterministic laws comprehensible through mathematics. Philosophically, this had enormous implications:

– Universality of Natural Laws: If Neptune’s existence could be deduced without seeing it, then all physical phenomena might obey similar rules waiting to be uncovered.

– Determinism vs Free Will: The idea that every event has a cause rooted in prior conditions fueled debates about free will versus determinism.

– Computability and Prediction: Scientists came to see the universe as a massive machine regulated by equations that were solvable in theory, if not always in practice. However, flaws in this viewpoint later emerged: quantum mechanics added intrinsic unpredictability, and relativity questioned absolute space and time. Nevertheless, Newtonian physics still suffices for the majority of practical applications today.

Conclusion: The discovery of Neptune exemplifies human ingenuity and scientific reasoning. Through meticulous observations, complex mathematics, and steadfast trust in Newton’s equations, Le Verrier and Adams accomplished the seemingly impossible: forecasting a faraway world before ever seeing it. Their work highlighted the beauty and universality of classical physics, influencing generations of cosmological thinking. The episode also revealed the limits of that understanding, opening doors for future revolutions in physics. As we continue our inquiry into the cosmos, let us remember Neptune’s lessons—and the humility necessary to seek truth beyond the limits of our current understanding.


Can Entropy Gradients Explain Forces? Revisiting a 2002 Approach to Emergent Gravity

24 Feb

Introduction

Key Contributions

  • Entropy Gradients and Forces: The 2002 article established a clear connection between entropy gradients and classical forces, providing a theoretical framework for understanding how entropy can give rise to macroscopic forces.
  • Information-Theoretic Approach: By treating entropy as a measure of information, the study demonstrated how the arrangement of particles and their energy states could be used to derive forces like the centrifugal and gravitational forces.
  • Foundations for Emergent Gravity: While the work focused on classical mechanics, the principles outlined—particularly the role of entropy gradients in generating forces—contributed to foundational concepts now recognized in later developments like emergent gravity.

Connection to Emergent Gravity

In recent years, the concept of emergent gravity has gained traction, with researchers like Erik Verlinde proposing that gravity itself is an entropic force arising from changes in information entropy. While Verlinde’s work extends these ideas to general relativity and cosmology, the foundational principles can be traced back to my 2002 paper. My work provided the first formal demonstration of how entropy gradients could give rise to forces, a concept that is central to emergent gravity.

Conclusion

It’s exciting to see how the ideas I explored in 2002 have evolved and inspired new directions in physics. While my work was framed as an “information-theoretic derivation of forces,” it laid the groundwork for what would later be called emergent gravity. I’m proud to have contributed to this foundational work and look forward to seeing how these ideas continue to develop.

About the Mechanics of the Anti-Gravity Wheel

23 Feb

The Anti-Gravity Wheel: Exploring Maxwell’s Wheel

The anti-gravity wheel, demonstrated through Maxwell’s wheel, showcases how spinning can create the illusion of reduced weight due to the conversion of potential energy to kinetic energy and the effects of downward acceleration. This phenomenon is explored through experiments that reveal the physics behind the wheel’s behavior when spinning and bouncing.

Understanding Maxwell’s Wheel

Maxwell’s wheel consists of a metal disc mounted on a rod, with strings attached to a base that supports the entire structure. This device is commonly used in educational settings to illustrate energy conversion principles. When the string is wound up and released, the wheel unravels, converting potential energy into rotational energy. As it descends, it accelerates, gaining speed, and upon reaching the bottom, it has enough rotational energy to wind back up, converting kinetic energy back into potential energy.

Energy Conversion in Action

The wheel’s motion exemplifies energy conversion:

  • Potential Energy (PE): Stored energy when the wheel is held at a height. Mathematically, it can be expressed as: PE = mgh where ( m ) is the mass, ( g ) is the acceleration due to gravity, and ( h ) is the height.
  • Kinetic Energy (KE): Energy of motion as the wheel spins and descends. It is given by: KE = \frac{1}{2}mv^2 + \frac{1}{2}I\omega^2 where ( v ) is the linear velocity, ( I ) is the moment of inertia, and ( \omega ) is the angular velocity.

The wheel continues to bounce back and forth, gradually losing energy due to friction, heat, and sound, which prevents it from returning to its original height.

The Weight Phenomenon

The most captivating aspect of the anti-gravity wheel is its behavior when weighed. Initially, the wheel is placed on a scale, which is zeroed out. As the wheel spins, an unexpected phenomenon occurs: the scale indicates a negative weight.

Observations During Spinning

When the wheel is wound up and released, the scale shows a consistent negative weight, averaging around negative six grams. This suggests that the spinning wheel effectively weighs less than its actual mass. The phenomenon is not merely a fluctuation; it indicates a significant reduction in weight, approximately one percent of the wheel’s mass.

Comparing Gyroscopic Effects

This observation raises questions about gyroscopic effects. A well-known demonstration by Eric Laithwaite at Imperial College shows a 40-pound wheel that feels weightless when spun. However, upon further investigation, it becomes clear that while the gyroscope may feel lighter due to its motion, it does not actually weigh less. The sensation of lightness comes from the ease of maneuvering the wheel rather than a true reduction in weight.

The Role of Acceleration

To understand why Maxwell’s wheel appears to weigh less, we must consider the principles of acceleration. When an object accelerates downward, it experiences a decrease in weight. For instance, if one were to stand on a scale and jump, the scale would register a lower weight during the fall due to the downward acceleration.

Application to Maxwell’s Wheel

In the case of Maxwell’s wheel, as it bobs up and down, it is continuously accelerating downward, even when it is rising. This downward acceleration results in a perceived reduction in weight. When the wheel is allowed to fall freely, it would weigh less by its entire mass until it hits the ground, at which point it would register its full weight again.
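
As a rough check (the standard textbook analysis, not worked out in the demonstration itself): for a wheel of mass m, axle radius r, and moment of inertia I about its axle, the downward acceleration of the centre of mass while the string unwinds is a = \frac{g}{1 + I/(m r^2)}, so the scale reading drops by approximately \Delta W = m a = \frac{m g}{1 + I/(m r^2)}. For a disc of radius R spinning on a thin axle (I \approx \frac{1}{2} m R^2 with r \ll R), this is a small fraction of the weight, consistent with the roughly one-percent reduction described above.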

The Bouncing Effect

As the wheel bounces, it experiences a similar effect. The scale cannot accurately measure the rapid changes in force as the wheel hits the bottom, leading to an average weight that appears lower during its upward motion. The faster the wheel accelerates downward, the less weight is registered on the scale.

Conclusion

The anti-gravity wheel serves as a remarkable demonstration of fundamental physics principles, particularly the interplay between potential and kinetic energy, and the effects of acceleration on perceived weight. Through experiments with Maxwell’s wheel, we gain insights into how motion can alter our understanding of weight and force. This exploration not only enhances our comprehension of physics but also sparks curiosity about the intricate dynamics of motion and energy.

References

  1. Halliday, D., Resnick, R., & Walker, J. (2011). Fundamentals of Physics (9th ed.). Wiley.
  2. Serway, R. A., & Jewett, J. W. (2019). Physics for Scientists and Engineers (10th ed.). Cengage Learning.
  3. Laithwaite, E. Demonstration of gyroscopic effects at Imperial College.


Simon Shnoll’s Groundbreaking Insights on Measurement and Reality

21 Feb

1. Introduction

Simon El’evich Shnoll, a Russian biophysicist, spent decades investigating measurement anomalies, particularly in biochemical and physical processes. His observations suggest that random processes such as radioactive decay exhibit periodic and structured fluctuations, hinting at deep cosmophysical influences. His work challenges the fundamental assumption of measurement independence and randomness, proposing a revolutionary understanding of time and reality.

2. Early Career and Initial Discoveries

Shnoll’s journey into these anomalies began in September 1951 when he started working on a nuclear project. Despite the radioactive environment, he conducted biochemical experiments, supported by his mentors. However, what he discovered fundamentally challenged established scientific methods and interpretations.

3. The Anomaly in Measurements

During his experiments, Shnoll noticed deviations from the expected Gaussian distribution. Instead of a smooth bell curve, his data revealed structured fluctuations. The standard expectation for measurements follows: P(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(x - \mu)^2}{2\sigma^2}},

where:

  • x represents measured values,
  • \mu is the mean,
  • \sigma is the standard deviation.

However, Shnoll found that experimental results did not consistently follow this distribution, exhibiting periodic deviations. Even after averaging multiple measurements: X = \frac{1}{N} \sum_{i=1}^{N} x_i, the fluctuations persisted, suggesting an underlying structured phenomenon.
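
For context, here is a minimal illustrative sketch (simulated data only, not Shnoll’s measurements) of the conventional null expectation: independent Poisson counting measurements, of the kind expected for radioactive decay, produce a featureless, approximately Gaussian histogram, and averaging only narrows it.

import numpy as np

# Conventional null model: independent Poisson counts and their averages
rng = np.random.default_rng(0)
single = rng.poisson(lam=100, size=100_000)                       # single measurements
averaged = rng.poisson(lam=100, size=(100_000, 30)).mean(axis=1)  # 30-fold averages

for label, sample in (("single measurements", single), ("30-fold averages", averaged)):
    print(f"{label}: mean = {sample.mean():.2f}, std = {sample.std():.2f}")
# Under this assumption the histogram fine structure is featureless; Shnoll's claim is
# that real measurements show reproducible fine structure beyond this expectation.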

4. The Shift in Perspective

As he continued, Shnoll realized that time played a crucial role in his measurements. He introduced the concept of “parallel probes,” where experiments were conducted under the same conditions but at different times. This method revealed that measurement distributions depended on when they were recorded, leading to: P(X, t) \neq P(X, t+\Delta t).

This finding directly contradicted conventional assumptions that measurement distributions should be time-invariant under identical conditions.

5. Parallel vs. Serial Probes

To further investigate, Shnoll systematically compared measurements taken simultaneously at different locations versus those taken sequentially in the same location. He found that parallel measurements exhibited stronger correlations than serial ones, reinforcing the idea that each moment in time has unique physical properties influencing measurement outcomes.

6. Experimental Evidence

Over 25 years, Shnoll and his team conducted thousands of experiments, measuring fluctuations in:

  • Alpha decay of 239Pu and 241Am
  • Beta decay of tritium
  • Biochemical reaction rates

Each dataset exhibited periodicity linked to external cosmophysical factors, suggesting that stochastic processes are influenced by cosmic and geophysical conditions rather than being purely random.

7. Possible Explanations

Several hypotheses attempt to explain the Shnoll Effect:

  1. Cosmic Ray Influence: Variations in cosmic ray flux due to planetary motion.
  2. Gravitational and Inertial Effects: Influences from planetary alignments and Earth’s motion.
  3. Quantum Entanglement with the Universe: Suggesting nonlocal correlations in physical processes.

Despite these hypotheses, no widely accepted theoretical framework fully explains the observed periodic structures.

8. Implications for Fundamental Physics

Shnoll’s findings challenge key assumptions in physics:

  • Randomness of Decay: If decay rates are influenced by cosmic factors, the assumption of purely stochastic behavior in quantum mechanics needs revision.
  • Time-Dependent Measurements: Measurement outcomes depend on global and cosmophysical conditions, contradicting traditional metrology principles.
  • New Perspectives in Metrology: Precision measurements in physics and chemistry may need to account for celestial influences.

9. References

  • S. E. Shnoll, Cosmophysical Factors in Stochastic Processes, American Research Press, 2009.
  • S. E. Shnoll et al., “Regular Variations of the Fine Structure of Stochastic Distributions as a Consequence of Cosmophysical Influences,” Physics – Uspekhi, 2003.
  • S. E. Shnoll et al., “Experiments with Rotating Collimators Cutting Out Pencil of Alpha-Particles at Radioactive Decay of Pu-239 Evidence Sharp Anisotropy of Space,” arXiv preprint, 2005.

10. Conclusion: A New Worldview

Simon Shnoll’s research leads to a radical shift in our perception of time and measurement. His insistence that every moment has unique physical properties challenges the very foundation of scientific inquiry. As he reflects on his life’s work, Shnoll encourages scientists to remain open to revolutionary ideas that redefine our understanding of the universe.

His findings suggest that stochastic processes may be deeply entangled with the cosmic fabric, urging a reconsideration of randomness, measurement, and time in the broader context of physical reality.

Action-Reaction Law and Surface Forces in the Pop-Pop Boat Experiment

6 Jan

One of the most intriguing aspects of the pop-pop boat experiment, as demonstrated in the video, is its relation to Newton’s Third Law of Motion, often stated as:

“For every action, there is an equal and opposite reaction.”

This fundamental principle provides the foundation for understanding how the motion of the boat is generated. However, in the context of the experiment in this video, the application of the law becomes nuanced due to the influence of surface forces and fluid dynamics.


Action-Reaction Law in Pop-Pop Boats

In the simplest interpretation:

  1. Action: Water is expelled through the exhaust tubes due to the pressure generated by the expanding steam.
  2. Reaction: The boat moves forward as a result of the backward momentum of the expelled water.

This explains the forward motion of the boat during the expulsion phase. However, during the suction phase, water is drawn back into the tubes, which seemingly should cancel out the forward momentum. Yet, the boat still moves forward. Why?

Lord Munchausen defies physics: pulling his own boat forward in a whimsical battle against Newton’s Third Law, proving that imagination knows no limits!

The Role of Surface Forces

To understand this apparent contradiction, we must consider surface forces and momentum interactions within the system:

  1. Differential Momentum Exchange:
    • During water expulsion, the expelled water exerts momentum directly against the boat’s frame, propelling it forward.
    • During suction, water is drawn from all directions, and the opposing force is distributed over a larger area, resulting in weaker reverse momentum.
  2. Collision and Energy Dissipation:
    • As water is sucked back into the tubes, it collides with internal air pockets and the tube walls, dissipating energy. This results in a partial cancellation of the reverse momentum.
  3. Surface Interaction:
    • The tubes and the surrounding water interface act as a boundary where surface tension and viscosity influence fluid flow. These forces dampen the backward momentum during suction, further enhancing net forward motion.

Apparent Effects of Surface Forces

Surface forces also contribute to the efficiency of propulsion in several ways:

  • Collimation of Jet Streams: During expulsion, water exits the tubes in a more directed stream, producing a concentrated reaction force that maximizes forward motion.
  • Damping of Suction Dynamics: Surface tension and viscous drag smooth out oscillations, minimizing reverse momentum effects.
  • Stability and Directionality: Surface interactions stabilize the boat’s movement, preventing significant side-to-side oscillations that could waste energy.

Reconciling Theory and Observation

The experiment’s findings challenge simplified interpretations of action-reaction dynamics. By isolating the fluid interactions in the transparent boat, the video highlights that resonance and internal system dynamics dominate over simple expulsion-suction symmetry. This emphasizes the need to view the boat as a system where:

  • Net forward motion arises from asymmetric forces during the oscillatory cycle.
  • Momentum exchange within the tubes is modified by interactions at the gas-liquid boundary, including condensation effects and surface forces.

Broader Implications

The discussion of Newton’s Third Law in this context extends beyond pop-pop boats:

  • Fluid Propulsion Systems: Similar principles apply to jet engines and rockets, where nozzle design and fluid dynamics optimize thrust.
  • Heat Engines: The role of surface forces and resonance highlights the complexity of thermodynamic systems.
  • Biological Systems: Nature leverages asymmetric action-reaction mechanics in swimming organisms, where surface forces enhance propulsion efficiency.

The diagram and explanation together highlight the elegant interplay of forces and thermodynamics driving the boat.

Conclusion

While the pop-pop boat seems simple at first glance, its operation beautifully demonstrates the interplay between action-reaction forces, surface dynamics, and resonance. The experiment not only validates Newton’s Third Law (does it?*) but also sheds light on the subtle effects of fluid mechanics and energy dissipation, offering a richer understanding of motion in oscillatory systems. This discussion underscores the importance of revisiting fundamental laws in light of experimental nuances.

REF: https://www.youtube.com/watch?v=3AXupc7oE-g&list=LL&index=39

* The operation of the pop-pop boat provides a nuanced demonstration of Newton’s Third Law in conjunction with other physical principles, such as resonance and energy dissipation. While the experiment showcases the principle of action-reaction, it doesn’t rely solely on it to explain the net motion. The net forward motion results from asymmetric interactions and energy dissipation rather than a perfect pair of equal and opposite forces. Specifically:

  • Energy Loss and Momentum Cancellation:
    • During the suction phase, the reverse force imparted to the boat is partially canceled by energy dissipation (e.g., collision of water with air inside the tubes).
    • This leaves the forward expulsion force uncompensated, allowing the boat to move forward.
  • Surface Forces:
    • Viscosity, surface tension, and friction in the water-tube system introduce non-Newtonian effects, modifying the symmetry of action and reaction.

Thus, while the Third Law operates locally at every interaction point (e.g., between steam and water, or water and tube), the system as a whole relies on additional phenomena to achieve net forward motion.

Educational Article: Understanding How Electricity Moves Through Circuits

2 Jan

Electricity, a vital part of the modern world, is often hard to grasp because of how quickly it acts. The video “Watch Electricity Hit a Fork in the Road at Half a Billion Frames Per Second” captures this invisible phenomenon at extraordinary time resolution, and this article explores the intriguing dynamics of electric waves and currents that it reveals.

The High-Speed View of Electricity
Speed comes up frequently when talking about electricity. The video uses fast measurement equipment and specially made twisted-pair wires to effectively slow down the flow of electricity and record its movement and behaviour. This approach helps demystify abstract ideas such as resistance, voltage, and current, and the results reveal the slight yet crucial differences between electron drift and electric-wave propagation.

Propagation of Electrical Waves
One of the main points is the illustration of electric waves moving along a wire. When a source is connected, waves travel down the circuit, dividing at forks, reflecting back from dead ends, and eventually stabilising. This is not an instantaneous process. For instance:

An electric signal takes about 500 nanoseconds to travel along the 23-meter cable.
The circuit stabilises and conforms to Ohm’s Law after roughly 4,000 nanoseconds, or about eight round trips.
The way these waves interact with circuit components reveals how electricity “chooses” its path.
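
The settling behaviour can be illustrated with a tiny bounce-diagram calculation. The sketch below is a simplified model with assumed component values (not taken from the video): a voltage step launched into an ideal lossless line reflects back and forth between source and load until the load voltage converges to the Ohm’s-law value.

V_source = 1.0     # step amplitude (V); illustrative value
Z0 = 50.0          # characteristic impedance of the line (ohms); illustrative value
R_source = 10.0    # source resistance (ohms)
R_load = 200.0     # load resistance (ohms)

gamma_load = (R_load - Z0) / (R_load + Z0)          # reflection coefficient at the load
gamma_source = (R_source - Z0) / (R_source + Z0)    # reflection coefficient at the source

wave = V_source * Z0 / (R_source + Z0)              # amplitude of the initially launched wave
v_load = 0.0
for round_trip in range(1, 9):                      # about eight round trips, as noted above
    v_load += wave * (1 + gamma_load)               # incident plus reflected wave at the load
    wave *= gamma_load * gamma_source               # portion re-reflected back toward the load
    print(f"after round trip {round_trip}: load voltage = {v_load:.4f} V")

print(f"Ohm's-law steady state: {V_source * R_load / (R_source + R_load):.4f} V")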

The Water Channel Analogy
The creator of the video uses a water-channel model to make these ideas understandable. Water flowing through narrow channels simulates electron flow through wires. Even though the real dynamics of electrons are different, the analogy makes it easier to see how voltage and current behave in a circuit. Water, for example, has inertia because of its mass, while electrons interacting with electric and magnetic fields give a circuit its own form of “inertia”.

The model does have several drawbacks, though. Although it accurately depicts how waves propagate and reflect, some subtleties, such as the absence of magnetic effects in water, show where the comparison deviates from reality.

Observing Electrons in Action
Because electrons are invisible, electricity is especially difficult to study directly. Electric signals travel at a sizeable fraction of the speed of light, so cameras alone cannot follow them. To get around this, the video combines oscilloscopes, which can track voltage changes with nanosecond accuracy, with animations that depict the motion of electrons.

With this careful arrangement, we observe: i) voltage dips and spikes along the cables; ii) reflections caused by open or closed circuit endpoints; iii) waves that eventually level out into a steady current.

Insights from the Experiment: The video highlights the fact that electricity does not “know” the best path right away. Instead, waves bounce through the circuit in a trial-and-error-like process until the current stabilises. Understanding these waves is essential for understanding how circuits behave in different configurations.

Key takeaways include:

Electrical waves and electron motion are distinct but interdependent phenomena.
Reflections and splits in circuits play a vital role in stabilization.
Analogies, while helpful, must be used cautiously to avoid oversimplifications.


A Philosophical Question

Because electricity obeys intricate, exact, and remarkably efficient physical laws, it can exhibit seemingly intelligent behaviour. Although these behaviours may look like intelligence when demonstrated in experiments, they are ultimately grounded in natural principles and involve no inherent consciousness or decision-making. Or is there some intelligence in an electric wire?

Conclusion
The video closes the gap between theoretical ideas and real-world understanding by documenting the passage of electricity at previously unseen time resolution. It highlights how intricate electrical phenomena are and how cleverly science can uncover the invisible world. Whether you are a student or an enthusiast, this investigation stimulates curiosity and a deeper appreciation of the forces that power our daily lives.

Link to the video: https://www.youtube.com/watch?v=2AXv49dDQJw