How to get conditional running cumulative sum based on current row and previous rows?
How do I perform a running cumulative sum that is based on a condition involving the current row and previous rows?
Given the following table:
| acc | value | threshold |
|-----|-------|-----------|
| 3   | 1     | 1         |
| 1   | 2     | 2         |
| 2   | 3     | 2         |
I would like to find the cumulative sum of `acc` where `value >= threshold`, considering all `value`s from the start up to and including the current row. The expected output is `3, 1, 3`.
That is, the equivalent Python code might look like:

```python
for i in range(len(df)):
    for j in range(i + 1):  # include the current row
        if df[j].value >= df[i].threshold:
            df[i].cumsum += df[j].acc
```
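For concreteness, a runnable pure-Python sketch of that loop over the sample table (the tuple layout `(acc, value, threshold)` here is just for illustration) reproduces the expected output:

```python
# Sample table from the question: (acc, value, threshold)
rows = [(3, 1, 1), (1, 2, 2), (2, 3, 2)]

output = []
for i, (_, _, threshold) in enumerate(rows):
    # Sum acc over all rows from the start up to and including row i
    # whose value meets the *current* row's threshold.
    total = sum(acc for acc, value, _ in rows[: i + 1] if value >= threshold)
    output.append(total)

print(output)  # → [3, 1, 3]
```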
I tried using a windowed sum:

```python
import pyspark.sql.functions as F
from pyspark.sql.window import Window

df = spark.createDataFrame([(3, 1, 1), (1, 2, 2), (2, 3, 2)], ["acc", "value", "threshold"])
window = Window.rowsBetween(Window.unboundedPreceding, Window.currentRow)
display(df.withColumn("output", F.sum(F.when(F.col("value") >= F.col("threshold"), F.col("acc"))).over(window)))
```
But this gave `3, 4, 6`, because each row's `value` was compared against that row's own `threshold`, rather than against the current row's `threshold` for every row in the window.
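A loop-based sketch of what the windowed attempt actually computes (using the same illustrative tuple layout as above) makes the difference visible: the filter is applied row-by-row against each row's own `threshold`, and only then is the running sum taken.

```python
rows = [(3, 1, 1), (1, 2, 2), (2, 3, 2)]

# Each row's value is tested against its *own* threshold...
flags = [acc if value >= threshold else 0 for acc, value, threshold in rows]

# ...and the qualifying acc values are then cumulatively summed.
cumulative = [sum(flags[: i + 1]) for i in range(len(flags))]

print(cumulative)  # → [3, 4, 6]
```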