==== Data analysis ====
  
**Data Cleaning:**
Selection of a segment of interest, discarding outliers and unreliable data.
Because of hardware limitations, the reading of the marks was not reliable enough, so the data had to be conditioned manually.
Twelve mark readings were expected, but due to problems in the data acquisition only eight were usable; there were also some duplicated readings, which were identified manually from the expected timing and pattern.
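The manual conditioning described here boils down to two pandas operations: a label-based slice of the usable segment and dropping individual bad rows by index label. A minimal sketch on made-up data (the real indices come from inspecting the recording):

<code python>
import pandas as pd

# Hypothetical raw capture: the index stands in for sample numbers.
df = pd.DataFrame({"zone_1": [5, 0, 7, 9, 0, 3, 0, 8, 0, 6]})

# Keep only the segment of interest (.loc slicing is label-based and
# inclusive on both ends) ...
segment = df.loc[2:8]
# ... and drop a reading identified manually as a duplicate/outlier.
segment = segment.drop(5)

print(len(segment))  # 6 rows remain of the original 10
</code>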

**Path Segmentation:**
getPath(df) builds segments ("paths") delimited by zeros in the zone_1 column, grouping each set of four zeros into a new path.
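As a sanity check, the grouping logic can be exercised on a toy table in which zeros in zone_1 stand for the timing marks (the function mirrors getPath from the listing below; the data is invented):

<code python>
import pandas as pd

def getPath(df):
    # Indices of rows where zone_1 reads zero (a timing mark).
    idx = list(df.loc[df["zone_1"] == 0].index)
    path_list = []
    for i in range(len(idx)):
        # Every fourth zero closes a path spanning that group of four marks.
        if (i + 1) % 4 == 0:
            path_list.append(df.loc[idx[i - 3]:idx[i]])
    return path_list

toy = pd.DataFrame({"zone_1": [0, 5, 0, 0, 6, 0, 7, 0, 0, 8, 0, 0]})
paths = getPath(toy)
print(len(paths))  # 8 zeros -> two groups of four -> 2 paths
</code>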

**Speed Calculation:**
mean_speed(data) estimates the average speed between timing marks from the time intervals between zeros.
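In isolation, the computation is distance over time interval, averaged over the three gaps between four marks. The 200 mm and 1000 mm spacings below are read off the analysis code; treat them as an assumption about this particular test rig's geometry:

<code python>
# Timestamps (µs) at four successive zero marks, here 1e6 µs apart.
marks_us = [0, 1_000_000, 2_000_000, 3_000_000]
# Assumed distances between consecutive marks (mm): first gap 200 mm,
# then two 1000 mm gaps, as in the analysis code.
gaps_mm = [200, 1000, 1000]

speeds = [d / (t1 - t0) for d, t0, t1 in zip(gaps_mm, marks_us, marks_us[1:])]
avg_speed_mm_per_us = sum(speeds) / len(speeds)
print(avg_speed_mm_per_us)
</code>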

  * After interpolating the sensor data along the location axis (the temporal/spatial sequence), each 8×8 frame is upscaled to 64×64 pixels using cubic spline interpolation (zoom with order=3).

  * This spatial interpolation significantly increases the intra-frame resolution.

  * The aggregation step then merges the enlarged frames horizontally, averaging values over overlapping columns to preserve continuity.

  * The resulting plot is a much higher-resolution heatmap of the sensor data over the scanned path.

<code python>
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from scipy.ndimage import zoom
from scipy.interpolate import interp1d

fname1 = "vl53l8cx_data_third_test.csv"
df = pd.read_csv(fname1)
# Since the mark acquisition is not reliable enough, the data is trimmed
# manually to the usable segment and a duplicated reading is discarded.
dataFrame = df.loc[1419:2618].drop(2360)

def getPath(df):
    # Indices of rows where zone_1 reads zero (a timing mark).
    idx = list(df.loc[df["zone_1"] == 0].index)
    path_list = []
    for i in range(len(idx)):
        # Every fourth zero closes a path spanning that group of four marks.
        if (i + 1) % 4 == 0:
            path_list.append(df.loc[idx[i - 3]:idx[i]])
    return path_list

def mean_speed(data):
    # Average speed over the three intervals between four timing marks.
    # Mark spacings (mm): 200 for the first gap, 1000 for the other two.
    idx = list(data.loc[data["zone_1"] == 0].index)
    t1 = data.loc[idx[1]]["timestamp_us"] - data.loc[idx[0]]["timestamp_us"]
    t2 = data.loc[idx[2]]["timestamp_us"] - data.loc[idx[1]]["timestamp_us"]
    t3 = data.loc[idx[3]]["timestamp_us"] - data.loc[idx[2]]["timestamp_us"]
    spd1 = 200 / t1
    spd2 = 1000 / t2
    spd3 = 1000 / t3
    return (spd1 + spd2 + spd3) / 3

paths = getPath(dataFrame)

# 1. Calculate continuous locations and concatenate all paths:
for i in range(len(paths)):
    loc = (paths[i]["timestamp_us"] - paths[i].iloc[0]["timestamp_us"]) * mean_speed(paths[i])
    paths[i] = pd.concat([paths[i], loc.to_frame('location')], axis=1)
    # Drop the mark rows themselves; only real readings remain.
    paths[i] = paths[i].drop(list(paths[i].loc[paths[i]["zone_1"] == 0].index))

path_t = pd.concat(paths)
path_t = path_t.sort_values("location")

locations = path_t['location'].values
data_values = path_t.iloc[:, 1:-1].values  # Adjust indices if your columns differ

# 2. Interpolate sensor columns independently on a uniform location grid:
min_loc, max_loc = np.min(locations), np.max(locations)
num_interp_points = int(np.ceil(max_loc - min_loc)) + 1
interp_locations = np.linspace(min_loc, max_loc, num_interp_points)

interp_data = np.zeros((num_interp_points, data_values.shape[1]))
for col in range(data_values.shape[1]):
    interp_func = interp1d(locations, data_values[:, col], kind='linear', fill_value='extrapolate')
    interp_data[:, col] = interp_func(interp_locations)

# 3. Reshape each row into 8x8 frames:
num_frames = interp_data.shape[0]
frame_height, frame_width = 8, 8
frames_8x8 = [interp_data[i].reshape(frame_height, frame_width) for i in range(num_frames)]

# 4. Interpolate each 8x8 frame to 64x64 using scipy.ndimage.zoom:
zoom_factor = 64 / 8  # 8x scaling per axis

frames_64x64 = [zoom(frame, zoom_factor, order=3) for frame in frames_8x8]  # cubic spline interpolation (order=3)

# 5. Aggregate frames horizontally, averaging over overlapping columns:
max_offset = num_frames - 1
final_width = max_offset + 64  # width after scaling frames to 64 wide
final_frame_64 = np.zeros((64, final_width))
count_64 = np.zeros((64, final_width))

for i, frame in enumerate(frames_64x64):
    offset = i
    final_frame_64[:, offset:offset+64] += frame
    count_64[:, offset:offset+64] += 1

aggregated_64 = np.divide(final_frame_64, count_64, out=np.zeros_like(final_frame_64), where=count_64 != 0)

# 6. Plot the aggregated wide frame:
plt.figure(figsize=(final_width / 16, 8))  # Adjust size for clarity
plt.imshow(aggregated_64, cmap='viridis', aspect='auto')
plt.colorbar(label='Value')
plt.title("Aggregated Large Frame with 8x8 to 64x64 Spatial Interpolation")
plt.xlabel('Columns (scaled)')
plt.ylabel('Rows (scaled)')
plt.show()
</code>
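The two array operations at the heart of the listing, cubic upscaling and overlap averaging, can be checked in isolation on small synthetic frames:

<code python>
import numpy as np
from scipy.ndimage import zoom

# Cubic-spline upscaling: an 8x8 frame becomes 64x64 (factor 8 per axis).
frame = np.arange(64, dtype=float).reshape(8, 8)
big = zoom(frame, 64 / 8, order=3)
print(big.shape)  # (64, 64)

# Overlap averaging: two constant 2x3 frames offset by one column.
acc = np.zeros((2, 4))
cnt = np.zeros((2, 4))
for i, f in enumerate([np.full((2, 3), 1.0), np.full((2, 3), 3.0)]):
    acc[:, i:i + 3] += f
    cnt[:, i:i + 3] += 1
avg = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt != 0)
print(avg[0])  # [1. 2. 2. 3.]: the overlapped columns hold the mean
</code>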

{{ :amc:ss2025:group-a:screenshot_2025-07-29_142824.png?800 |Map of the path of the Gießwagen}}
===== Discussion =====
  
amc/ss2025/group-a/start.1753793727.txt.gz · Last modified: 2025/07/29 14:55 by 35120_students.hsrw