between

Checks whether the column value is between the lower and upper bounds, inclusive.

Syntax

between(lowerBound, upperBound)

Parameters

Parameter   Type             Description
lowerBound  value or Column  The lower-bound value, inclusive
upperBound  value or Column  The upper-bound value, inclusive

Returns

Column (boolean)
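
Because the return value is a boolean Column, it can be used directly as a predicate, for example with filter. A minimal sketch:

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
# Keep only rows whose age falls in [2, 4]
df.filter(df.age.between(2, 4)).show()
# +---+-----+
# |age| name|
# +---+-----+
# |  2|Alice|
# +---+-----+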

Examples

Using between with integer values:

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
df.select(df.name, df.age.between(2, 4)).show()
# +-----+---------------------------+
# | name|((age >= 2) AND (age <= 4))|
# +-----+---------------------------+
# |Alice|                       true|
# |  Bob|                      false|
# +-----+---------------------------+

Using between with string values:

df = spark.createDataFrame([("Alice", "A"), ("Bob", "B")], ["name", "initial"])
df.select(df.name, df.initial.between("A", "B")).show()
# +-----+-----------------------------------+
# | name|((initial >= A) AND (initial <= B))|
# +-----+-----------------------------------+
# |Alice|                               true|
# |  Bob|                               true|
# +-----+-----------------------------------+

Using between with float values:

df = spark.createDataFrame(
    [(2.5, "Alice"), (5.5, "Bob")], ["height", "name"])
df.select(df.name, df.height.between(2.0, 5.0)).show()
# +-----+-------------------------------------+
# | name|((height >= 2.0) AND (height <= 5.0))|
# +-----+-------------------------------------+
# |Alice|                                 true|
# |  Bob|                                false|
# +-----+-------------------------------------+

Using between with date values:

import pyspark.sql.functions as sf
df = spark.createDataFrame(
    [("Alice", "2023-01-01"), ("Bob", "2023-02-01")], ["name", "date"])
df = df.withColumn("date", sf.to_date(df.date))
df.select(df.name, df.date.between("2023-01-01", "2023-01-15")).show()
# +-----+-----------------------------------------------+
# | name|((date >= 2023-01-01) AND (date <= 2023-01-15))|
# +-----+-----------------------------------------------+
# |Alice|                                           true|
# |  Bob|                                          false|
# +-----+-----------------------------------------------+

Using between with timestamp values:

import pyspark.sql.functions as sf
df = spark.createDataFrame(
    [("Alice", "2023-01-01 10:00:00"), ("Bob", "2023-02-01 10:00:00")],
    schema=["name", "timestamp"])
df = df.withColumn("timestamp", sf.to_timestamp(df.timestamp))
df.select(df.name, df.timestamp.between("2023-01-01", "2023-02-01")).show()
# +-----+---------------------------------------------------------+
# | name|((timestamp >= 2023-01-01) AND (timestamp <= 2023-02-01))|
# +-----+---------------------------------------------------------+
# |Alice|                                                     true|
# |  Bob|                                                    false|
# +-----+---------------------------------------------------------+
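
As the Parameters table notes, the bounds can also be Column expressions rather than literals, so the limits can vary per row. A minimal sketch, with hypothetical min_age and max_age columns used as per-row bounds:

# min_age and max_age are hypothetical columns holding per-row bounds
df = spark.createDataFrame(
    [(2, 1, 3, "Alice"), (5, 1, 3, "Bob")],
    ["age", "min_age", "max_age", "name"])
df.select(df.name, df.age.between(df.min_age, df.max_age)).show()
# +-----+---------------------------------------+
# | name|((age >= min_age) AND (age <= max_age))|
# +-----+---------------------------------------+
# |Alice|                                   true|
# |  Bob|                                  false|
# +-----+---------------------------------------+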